CN113084815B - Physical size calculation method and device of loaded robot, and robot - Google Patents


Publication number
CN113084815B
CN113084815B (application CN202110400205.9A)
Authority
CN
China
Prior art keywords
robot
target
point cloud
loaded
cloud data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110400205.9A
Other languages
Chinese (zh)
Other versions
CN113084815A (en)
Inventor
孙锐
Current Assignee
Shanghai Noah Wood Robot Technology Co ltd
Original Assignee
Shanghai Zhihuilin Medical Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Zhihuilin Medical Technology Co ltd filed Critical Shanghai Zhihuilin Medical Technology Co ltd
Priority to CN202110400205.9A priority Critical patent/CN113084815B/en
Publication of CN113084815A publication Critical patent/CN113084815A/en
Application granted granted Critical
Publication of CN113084815B publication Critical patent/CN113084815B/en
Legal status: Active

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1694Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697Vision controlled systems
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1602Programme controls characterised by the control system, structure, architecture
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1656Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning

Abstract

The invention provides a physical dimension calculation method and device for a loaded robot, and a robot, wherein the method comprises: acquiring second point cloud data of a target cargo through a configured three-dimensional vision sensor; extracting the outline dimension information of the target cargo from the second point cloud data; and obtaining the outline dimension information of the loaded robot from the outline dimension information of the target cargo and that of the robot itself. The invention enables the robot to identify, in real time, the outline dimensions of the cargo it loads each time, and hence to obtain the outline dimensions of the loaded robot; performing motion control according to these dimensions improves motion safety.

Description

Physical size calculation method and device of loaded robot, and robot
Technical Field
The invention relates to the technical field of vision, in particular to a physical dimension calculation method and device of a loaded robot and the robot.
Background
Many logistics robots support automatic loading and unloading of containers/shelves. Automatic loading and unloading means that the robot can identify a container to be transported, carry it to its destination in either the "piggyback" manner (fig. 5-(a)) or the "pulling" manner (fig. 5-(b)), and then automatically unload it.
Whichever mode is adopted, once the unloaded robot has picked up a container/shelf, the container/shelf and the robot become a new whole: the loaded robot. Clearly, the shape of the loaded robot, such as its length, width and height, differs markedly from that of the unloaded robot. The safety of the loaded robot in motion can be ensured only if it is modeled according to its real three-dimensional physical dimensions and that model is used for motion planning and control. For example, if the container is wider than the robot body, planning with the body dimensions alone cannot guarantee that the container avoids collisions with surrounding obstacles during motion.
A common practice is to support only standard containers/shelves whose sizes are known in advance: once the robot identifies which container/shelf it is carrying, it can look up the corresponding size and update the outline dimension information of the loaded robot accordingly.
In practice, however, customers often want to use their own existing containers/shelves, i.e. to adapt the robot to those containers/shelves rather than procure new ones built to the robot's standard. In addition, the transported goods may extend beyond the boundary of the shelf. It is therefore necessary for the robot to identify and determine, on every automatic loading, the actual physical dimensions of the container/shelf being loaded (including any items on the shelf that extend beyond it) to ensure safety.
Disclosure of Invention
One of the objectives of the present invention is to provide a method and an apparatus for calculating a physical dimension of a loaded robot, and a robot, so as to overcome at least one of the disadvantages in the prior art.
The technical scheme provided by the invention is as follows:
A physical dimension calculation method of a loaded robot includes: acquiring second point cloud data of the target cargo through a configured three-dimensional vision sensor; extracting the outline dimension information of the target cargo from the second point cloud data; and obtaining the outline dimension information of the loaded robot from the outline dimension information of the target cargo and that of the robot itself.
Further, after obtaining the external dimension information of the robot after loading, the method comprises the following steps: aligning the robot with the target goods to prepare for loading the goods; and after the target goods are loaded, performing motion control on the robot according to the loaded outline dimension information of the robot.
Further, the acquiring of the second point cloud data of the target cargo through the configured three-dimensional vision sensor comprises: moving one full circle around the target cargo and acquiring first point cloud data of the target cargo from multiple positions; and obtaining the second point cloud data of the target cargo from all the first point cloud data.
Further, the aligning the robot with the target cargo comprises: determining a target position of a first connecting surface according to the overall dimension information of the target cargo, wherein the first connecting surface is a contact surface for connecting the target cargo side and the robot; and the robot moves to the target position of the first connecting surface until the target position of a second connecting surface is superposed with the target position of the first connecting surface, wherein the second connecting surface is a contact surface for connecting the robot side and the target goods.
Further, still include: adding visual characteristic information to the target position of the first connection face; the robot moving to the target position of the first connection face includes: and the robot moves to the visual characteristic information through observation of the visual characteristic information.
Further, still include: acquiring the relative pose relationship between the robot and the target cargo according to the observed visual characteristic information; and adjusting the pose of the robot according to the relative pose relation, so that the target position of the second connecting surface is coincided with the target position of the first connecting surface, and the robot and the target goods have no relative rotation offset.
The invention also provides a physical size calculation method of the loaded robot, which comprises the following steps: acquiring second point cloud data of the target cargo through the configured three-dimensional vision sensor; obtaining point cloud data of the robot according to the outline dimensions of the robot; performing joint matching on the second point cloud data of the target cargo and the point cloud data of the robot to obtain the point cloud data of the loaded robot; and obtaining the outline dimension information of the loaded robot from the point cloud data of the loaded robot.
The present invention also provides a physical size calculation apparatus for a loaded robot, including: a point cloud acquisition module, used for acquiring second point cloud data of the target cargo through the configured three-dimensional vision sensor; and a size calculation module, used for extracting the outline dimension information of the target cargo from the second point cloud data, and obtaining the outline dimension information of the loaded robot from the outline dimension information of the target cargo and that of the robot itself.
The present invention also provides a physical size calculation apparatus for a loaded robot, including: a point cloud acquisition module, used for acquiring second point cloud data of the target cargo through the configured three-dimensional vision sensor, obtaining point cloud data of the robot according to the outline dimensions of the robot, and performing joint matching on the second point cloud data of the target cargo and the point cloud data of the robot to obtain the point cloud data of the loaded robot; and a size calculation module, used for obtaining the outline dimension information of the loaded robot from the point cloud data of the loaded robot.
The present invention also provides a robot comprising the physical size calculation apparatus of the loaded robot described above.
The physical size calculation method and device of the loaded robot and the robot provided by the invention at least have the following beneficial effects:
1. no matter the loaded goods are in standard size or nonstandard size, the invention utilizes the 3D vision technology to enable the robot to adaptively measure the outline size information of the loaded goods and the overall physical size of the loaded robot.
2. The invention controls the movement according to the outline dimension information of the loaded robot, and can improve the safety of the loaded robot in the movement.
3. According to the invention, the alignment of the robot and the target goods can be accelerated by adding the visual characteristic information at the target position of the first connecting surface; through the alignment of the robot and the target goods, the calculation accuracy of the overall dimension information of the loaded robot can be ensured.
Drawings
The above features, technical features, advantages and implementations of a method and apparatus for calculating a physical dimension of a loaded robot will be further described in the following preferred embodiments with reference to the accompanying drawings in a clearly understandable manner.
FIG. 1 is a flow chart of one embodiment of a method of calculating a physical dimension of a loaded robot of the present invention;
FIG. 2 is a flow chart of another embodiment of a method of calculating a physical dimension of a loaded robot of the present invention;
FIG. 3 is a schematic structural diagram of an embodiment of a physical dimension calculating apparatus of a loaded robot according to the present invention;
FIG. 4 is a schematic structural diagram of one embodiment of a robot of the present invention;
FIG. 5 is a schematic diagram of the loading pattern of the robot and the containers/pallets;
fig. 6 is a schematic top view of the latent robot aligned with a container/pallet.
Detailed Description
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the following description will be made with reference to the accompanying drawings. It is obvious that the drawings in the following description are only some examples of the invention, and that for a person skilled in the art, other drawings and embodiments can be derived from them without inventive effort.
For the sake of simplicity, the drawings only schematically show the parts relevant to the present invention, and they do not represent the actual structure as a product. In addition, in order to make the drawings concise and understandable, components having the same structure or function in some of the drawings are only schematically depicted, or only one of them is labeled. In this document, "one" means not only "only one" but also a case of "more than one".
One embodiment of the present invention, as shown in fig. 1, is a method for calculating a physical size of a loaded robot, including:
step S110 is to acquire second point cloud data of the target cargo through the configured three-dimensional visual sensor.
The target cargo is a container or a pallet or the like to be loaded by the robot.
The robot needs to be equipped with at least one three-dimensional vision sensor, such as a 3D laser scanner, a TOF camera or a structured-light camera. The set of points obtained by sampling the spatial coordinates of an object's surface is called a point cloud; each point is expressed in three-dimensional coordinates (X, Y, Z). A single scan or shot of the target cargo with the three-dimensional vision sensor yields point cloud data, referred to here as first point cloud data of the target cargo.
Because occlusion makes it impossible to obtain a full view of the target cargo from a single scan or shot, the target cargo needs to be scanned or shot from multiple positions, each scan or shot yielding one frame of first point cloud data. The second point cloud data of the target cargo is obtained from these multiple frames, in particular by stitching the first point cloud data of adjacent scans. The second point cloud data is the complete three-dimensional point cloud of the target cargo and describes it fully.
Preferably, the robot moves one full circle around the target cargo, acquiring first point cloud data from several positions with the three-dimensional vision sensor, and the second point cloud data is obtained from all the first point cloud data. For example, during the 360-degree traversal one frame of first point cloud data is captured every 120 degrees, and the second point cloud data is obtained from the three resulting frames.
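The stitching of per-view frames into one complete cloud can be sketched as follows. This is a minimal illustration, not the patent's implementation: it assumes each frame's sensor pose in a common cargo frame is already known (e.g. from odometry or registration), and the function and argument names are hypothetical.

```python
import numpy as np

def merge_scans(scans, poses):
    """Merge per-view point clouds (Nx3 arrays) into one cloud.

    Each scan is expressed in its own sensor frame; `poses` gives the
    4x4 homogeneous transform from that sensor frame to a common cargo
    frame (assumed known, e.g. from odometry or marker registration).
    """
    merged = []
    for pts, T in zip(scans, poses):
        homo = np.hstack([pts, np.ones((len(pts), 1))])  # Nx4 homogeneous
        merged.append((homo @ T.T)[:, :3])               # into common frame
    return np.vstack(merged)
```

In practice the poses between adjacent scans would be refined by a registration step such as ICP before merging; the sketch above shows only the final transform-and-concatenate stage.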
And step S120, extracting the overall dimension information of the target goods according to the second point cloud data of the target goods.
The second point cloud data is processed, for example by edge recognition and extraction, to obtain the contour information of the target cargo, from which the outline dimension information is determined according to a preset dimension definition. For example, a cuboid can be described by its length, width and height, and a cylinder by its base diameter and height. For uses such as motion control of the loaded robot, the maximum height, maximum width and maximum length of the cargo matter most, so these maxima of the target cargo's contour can be taken as the outline dimension information; this works for both regular and irregular objects.
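Under the maximum-extent convention just described, extracting length, width and height from the second point cloud reduces to an axis-aligned bounding box. A sketch, assuming the cloud is already expressed in an upright cargo frame whose axes match length/width/height:

```python
import numpy as np

def bounding_dimensions(cloud):
    """Maximum length, width and height of an Nx3 point cloud,
    taken as its axis-aligned bounding-box extents."""
    mins = cloud.min(axis=0)
    maxs = cloud.max(axis=0)
    length, width, height = maxs - mins
    return length, width, height
```

This handles irregular cargo the same way as regular cargo, since only the extreme points along each axis contribute.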
The point cloud data of the target goods are obtained through the three-dimensional visual sensor, and then the point cloud data are processed to obtain the overall dimension information of the target goods, so that the robot can self-adaptively measure the overall dimension information of the target goods no matter whether the target goods are in standard dimensions or non-standard dimensions.
Step S130 obtains the loaded outline dimension information of the robot according to the outline dimension information of the target cargo and the outline dimension information of the robot itself.
According to the loading mode, the robot can obtain the loaded outline dimension information of the robot according to the outline dimension information of the target goods and the outline dimension information of the robot.
Assuming that the overall dimension information of the target cargo and the overall dimension information of the robot itself are expressed by length, width and height, the overall dimension information after the robot is loaded is expressed by the maximum length, width and height:
if the robot is in a submerging piggyback type, the height of the loaded robot is equal to the sum of the height of the target goods and the height of the robot, the length is the maximum value of the length of the target goods and the length of the robot, and the width is the maximum value of the width of the target goods and the width of the robot. If the robot is in a pulling type, the length of the robot after being loaded is equal to the sum of the two, and the height and the width are respectively the maximum value between the two. Thus, the contour dimension information of the loaded robot can be obtained.
Step S140 is to align the robot with the target cargo to prepare for loading the cargo.
Alignment means that the robot connects to the target cargo according to a preset requirement, which is typically established for safety and stability of operation. For the submerging piggyback type, the requirement may be that, when the robot connects with the target cargo, the center of the robot's top surface matches the center of the cargo's bottom surface; the centers of gravity of the robot and the cargo then lie on the same vertical line, so the loaded cargo runs stably and is unlikely to tilt or fall off while the robot moves. For the pulling type, the requirement may be that the vertical center line of the rear of the robot coincides with the vertical center line of the front of the target cargo.
The robot is equipped with at least one alignment vision sensor, such as a camera, through which the robot and the target cargo are aligned. If the robot is of the submerging piggyback type, the alignment vision sensor is arranged on the robot's top surface; if it is of the pulling type, the sensor is arranged at the robot's rear.
According to the overall dimension information of the target goods, the dimension information of the first connecting surface is determined, and the target position of the first connecting surface is further determined, wherein the first connecting surface is a contact surface of the target goods side and the robot for connection. The robot moves to the target position of the first connecting surface until the target position of the second connecting surface is superposed with the target position of the first connecting surface, and the second connecting surface is a contact surface for connecting the robot side and the target goods.
If the cargo is loaded in a submerging piggyback mode, the first connecting surface is the bottom surface of the target cargo, and the target position of the first connecting surface is the center position of the bottom surface of the target cargo; the second connecting surface is the top surface of the robot, and the target position of the second connecting surface is the center position of the top surface of the robot. If the goods are loaded in a pulling type, the first connecting surface is the front of the target goods, and the target position of the first connecting surface is the vertical center line position of the first connecting surface; the second connecting surface is the back of the robot, and the target position of the second connecting surface is the position of a vertical center line of the second connecting surface.
Aiming at the submerging piggyback robot, the bottom plane size of the target cargo can be obtained according to the overall dimension information of the target cargo, and the center position of the bottom surface is determined according to the bottom plane size.
In order to speed up the alignment between the robot and the target cargo, visual feature information may be added to the target position of the first connection surface, such as attaching a two-dimensional code, attaching a label with a preset texture, and the like.
While moving to the target position of the first connection surface, the robot adjusts its approach by observing the visual feature information: for example, it observes the distance and orientation from its current position to the visual feature and adjusts its motion accordingly. Because the visual feature is placed at the target position of the first connection surface, approaching the feature speeds up the coincidence of the target positions of the first and second connection surfaces, and hence the alignment of the robot with the target cargo.
Further, acquiring a relative pose relation between the robot and the target cargo according to the observed visual characteristic information; and adjusting the pose of the robot according to the relative pose relation to ensure that the target position of the second connecting surface is superposed with the target position of the first connecting surface, and the robot and the target goods have no relative rotation offset.
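One possible form of this pose correction, as a sketch under stated assumptions: the alignment sensor is assumed to report the fiducial's planar pose (x, y, yaw) in the robot frame with x forward, y left and yaw in radians; the function name and convention are hypothetical, not from the patent.

```python
import math

def alignment_correction(marker_x, marker_y, marker_yaw):
    """From the fiducial's observed planar pose in the robot frame,
    compute the motion needed to center on the marker and remove the
    relative rotation offset: (distance to drive, bearing to turn
    toward first, final rotation to square up with the cargo)."""
    distance = math.hypot(marker_x, marker_y)
    bearing = math.atan2(marker_y, marker_x)  # direction to the marker
    return distance, bearing, -marker_yaw     # negate to cancel the offset
```

A real controller would apply this iteratively as new observations arrive, rather than as a single open-loop move.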
And S150, after the target goods are loaded, controlling the motion of the robot according to the loaded overall dimension information of the robot.
In the embodiment, no matter the loaded goods are in standard size or nonstandard size, the robot can self-adaptively measure the overall dimension information of the loaded goods by utilizing a three-dimensional visual sensor and a point cloud data processing technology; through the alignment of the robot and the target goods, the calculation accuracy of the overall dimension information after the robot is loaded can be ensured; the alignment of the robot and the target goods can be accelerated by adding visual characteristic information at the target position of the first connecting surface; the motion control is carried out according to the outline dimension information of the loaded robot, so that the safety of the loaded robot in motion can be improved.
Another embodiment of the present invention, as shown in fig. 2, is a method for calculating a physical size of a loaded robot, including:
step S210 obtains second point cloud data of the target cargo through the configured three-dimensional visual sensor.
The robot needs to be provided with at least 1 three-dimensional vision sensor. The three-dimensional vision sensor is adopted to scan or shoot the target cargo once, so that point cloud data can be obtained and are called as first point cloud data of the target cargo.
Preferably, the robot moves one full circle around the target cargo, acquiring first point cloud data of the target cargo from several positions with the three-dimensional vision sensor; the second point cloud data is obtained from all the first point cloud data.
Both the first and second point cloud data are referenced to a cargo body coordinate system, a three-dimensional rectangular coordinate system constructed with some point of the target cargo as the coordinate origin. Preferably the origin is the alignment detection point, the point used to align the robot with the target cargo. If the loading mode is the piggyback type, the bottom-face center of the target cargo is preferably taken as the origin.
Step S220, point cloud data of the robot is obtained according to the appearance size of the robot.
The robot constructs its own point cloud data from its outline dimensions. This point cloud data is referenced to a robot body coordinate system, a three-dimensional rectangular coordinate system constructed with some point of the robot as the coordinate origin, preferably the alignment detection point. If the loading mode is the piggyback type, the top-face center of the robot is preferably taken as the origin.
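A minimal way to construct such a point cloud from the robot's outline dimensions, assuming a simple box model with the top-face center as origin (the piggyback case described above; the function name is illustrative):

```python
import numpy as np
from itertools import product

def robot_box_cloud(length, width, height):
    """Sparse point cloud of the robot's bounding box (its 8 corners),
    in a body frame whose origin is the top-face center, so the top
    face lies at z = 0 and the base at z = -height."""
    xs = (-length / 2, length / 2)
    ys = (-width / 2, width / 2)
    zs = (-height, 0.0)
    return np.array(list(product(xs, ys, zs)))
```

Eight corner points suffice for bounding-box arithmetic; a denser sampling of the faces could be used if later processing expects a full surface cloud.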
Step S230 performs joint matching on the second point cloud data of the target cargo and the point cloud data of the robot to obtain point cloud data loaded by the robot.
Assume that the cargo body coordinate system referenced by the second point cloud data and the robot body coordinate system referenced by the robot's point cloud data both take the alignment point as their origin, and that the X/Y/Z axes of the two systems are mutually parallel. Once the robot and the target cargo are aligned (i.e. the alignment points coincide), the two coordinate systems coincide completely, and the joint matching of the second point cloud data of the target cargo with the point cloud data of the robot becomes simple.
If at least one of the two coordinate systems does not take the alignment point as its origin, the corresponding point cloud data must first be transformed into a coordinate system whose origin is the alignment point, after which the joint matching is performed.
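The convert-then-match step might look like this. A sketch only: the 4x4 transforms from each body frame to the alignment-point frame are assumed given (identity when a body frame already uses the alignment point as origin), and the names are hypothetical.

```python
import numpy as np

def joint_match(cargo_cloud, robot_cloud, T_cargo, T_robot):
    """Express both Nx3 clouds in the shared alignment-point frame
    and merge them into the loaded-robot point cloud.

    T_cargo / T_robot: 4x4 transforms from each body frame to the
    frame whose origin is the alignment point."""
    def to_aligned(pts, T):
        homo = np.hstack([pts, np.ones((len(pts), 1))])
        return (homo @ T.T)[:, :3]
    return np.vstack([to_aligned(cargo_cloud, T_cargo),
                      to_aligned(robot_cloud, T_robot)])
```

With both clouds in one frame, the loaded robot's outline dimensions follow from the same bounding-box extraction used for the cargo alone.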
And S240, obtaining the external dimension information of the loaded robot according to the point cloud data of the loaded robot.
And processing the point cloud data loaded by the robot, such as edge recognition and extraction, and acquiring the contour information of the loaded robot. Based on the contour information, the contour dimension information of the loaded robot (i.e., the contour dimension information after the robot is loaded) is determined according to the preset contour dimension definition.
The second point cloud data of the target cargo is acquired by the three-dimensional vision sensor and jointly matched with the point cloud data of the robot to obtain the point cloud data of the loaded robot. Processing this loaded point cloud yields the outline dimension information of the loaded robot, so that the robot can adaptively measure its loaded dimensions whether the target cargo is of standard or non-standard size.
Step S250, the robot aligns with the target cargo to prepare for loading the cargo.
Alignment means that the robot connects to the target cargo according to a preset requirement. The robot is equipped with at least one alignment vision sensor, through which the robot and the target cargo are aligned. If the robot is of the submerging piggyback type, the alignment vision sensor is arranged on the robot's top surface; if it is of the pulling type, the sensor is arranged at the robot's rear.
According to the overall dimension information of the target goods, the dimension information of the first connecting surface is determined, and the target position of the first connecting surface is further determined, wherein the first connecting surface is a contact surface of the target goods side and the robot for connection. The robot moves to the target position of the first connecting surface until the target position of the second connecting surface is superposed with the target position of the first connecting surface, and the second connecting surface is a contact surface for connecting the robot side and the target goods.
If the cargo is loaded in a submerging piggyback mode, the first connecting surface is the bottom surface of the target cargo, and the target position of the first connecting surface is the center position of the bottom surface of the target cargo; the second connecting surface is the top surface of the robot, and the target position of the second connecting surface is the center position of the top surface of the robot. If the goods are loaded in a pulling type, the first connecting surface is the front of the target goods, and the target position of the first connecting surface is the vertical center line position of the first connecting surface; the second connecting surface is the back of the robot, and the target position of the second connecting surface is the position of a vertical center line of the second connecting surface.
In order to speed up the alignment between the robot and the target cargo, visual feature information may be added to the target position of the first connection surface, such as attaching a two-dimensional code, attaching a label with a preset texture, and the like.
When the robot moves to the target position of the first connection surface, the robot approaches the visual characteristic information by observing the visual characteristic information. The visual characteristic information is arranged at the target position of the first connecting surface, so that the visual characteristic information is close to the visual characteristic information, the superposition of the target position of the first connecting surface and the target position of the second connecting surface can be accelerated, and the alignment of the robot and the target goods is accelerated.
And step S260, after the target goods are loaded, the robot is subjected to motion control according to the loaded outline dimension information of the robot.
In this embodiment, another method is provided for calculating the external dimension information of the loaded robot, that is, the point cloud data of the loaded robot is obtained first, and then the external dimension information of the loaded robot is obtained according to the point cloud data of the loaded robot.
One embodiment of the present invention, as shown in fig. 3, is a physical size calculation apparatus 100 of a loaded robot, including:
The point cloud obtaining module 110 is configured to obtain second point cloud data of the target cargo through the configured three-dimensional vision sensor.
The robot needs to be provided with at least one three-dimensional vision sensor, which may be a 3D laser scanner, a TOF camera, a structured-light camera, or the like. Scanning or photographing the target cargo once with the three-dimensional vision sensor yields point cloud data, referred to as first point cloud data of the target cargo.
Since occlusion makes it impossible to obtain a full view of the target cargo in a single scan or photograph, the target cargo needs to be scanned or photographed from multiple positions, each yielding one frame of first point cloud data. The second point cloud data of the target cargo is obtained from these multiple frames of first point cloud data, in particular by stitching the first point cloud data obtained at adjacent scans. The second point cloud data is a complete three-dimensional point cloud of the target cargo and describes it fully.
Preferably, the point cloud obtaining module 110 is further configured to travel one full circle around the target cargo, obtaining first point cloud data of the target cargo from a plurality of positions, and to obtain the second point cloud data of the target cargo from all of the first point cloud data.
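The multi-frame stitching described above can be sketched as follows. This is only an illustrative outline, not the patented implementation: it assumes each frame's sensor pose in a common frame is already known (for example from the robot's odometry while circling the cargo), and `stitch_frames` is a hypothetical name.

```python
import numpy as np

def stitch_frames(frames, poses):
    """Merge per-view first point cloud frames into one second point cloud.

    frames: list of (N_i, 3) arrays, each in its sensor's local frame.
    poses:  list of (R, t) pairs mapping each sensor frame into a common
            world frame (assumed known; a real system would refine them,
            e.g. with ICP between adjacent frames).
    """
    world_points = []
    for pts, (R, t) in zip(frames, poses):
        world_points.append(pts @ R.T + t)  # rigid transform into world frame
    return np.vstack(world_points)
```

In practice the merged cloud would also be voxel down-sampled to remove duplicated surface points where adjacent views overlap.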
The size calculation module 120 is configured to extract the overall dimension information of the target cargo from the second point cloud data of the target cargo, and to obtain the loaded overall dimension information of the robot from the overall dimension information of the target cargo and the overall dimension information of the robot.
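As a concrete illustration of combining the two sets of overall dimensions, one simple rule follows the two loading modes described earlier: in the latent piggyback mode the footprint is the larger of the two and the heights add, while in the pulling mode the lengths add. This rule is an assumption for illustration; the patent does not fix a specific formula, and `loaded_dimensions` is a hypothetical helper taking (length, width, height) triples.

```python
def loaded_dimensions(cargo_lwh, robot_lwh, mode="piggyback"):
    """Combine cargo and robot (length, width, height) overall dimensions.

    Illustrative rule only: piggyback stacks the cargo on the robot's top
    surface; pulling trails the cargo behind the robot's back.
    """
    cl, cw, ch = cargo_lwh
    rl, rw, rh = robot_lwh
    if mode == "piggyback":
        return (max(cl, rl), max(cw, rw), ch + rh)
    if mode == "pulling":
        return (cl + rl, max(cw, rw), max(ch, rh))
    raise ValueError(f"unknown loading mode: {mode}")
```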
The alignment module 130 is used for aligning the robot with the target cargo in preparation for loading.
Alignment means that the robot engages the target cargo according to preset requirements. The robot is equipped with at least one alignment vision sensor, through which the robot and the target cargo are aligned. If the robot is of the latent piggyback type, the alignment vision sensor is arranged on the top surface of the robot; if it is of the pulling type, the sensor is arranged at the back of the robot.
The dimension information of the first connecting surface is determined from the overall dimension information of the target cargo, and from it the target position of the first connecting surface, where the first connecting surface is the contact surface on the target cargo side used for connection with the robot. The robot moves to the target position of the first connecting surface until the target position of the second connecting surface coincides with it, where the second connecting surface is the contact surface on the robot side used for connection with the target cargo.
In order to speed up the alignment between the robot and the target cargo, visual feature information may be added at the target position of the first connecting surface, such as a two-dimensional code or a label with a preset texture.
Further, the alignment module 130 is also configured to obtain the relative pose relationship between the robot and the target cargo from the observed visual feature information, and to adjust the pose of the robot according to that relationship so that the target position of the second connecting surface coincides with the target position of the first connecting surface and there is no relative rotational offset between the robot and the target cargo.
The motion control module 140 is used for controlling the motion of the robot according to the loaded overall dimension information of the robot after the target cargo is loaded.
In this embodiment, whether the loaded goods are of standard or nonstandard size, the robot can adaptively measure their overall dimension information using a three-dimensional vision sensor and point cloud processing. Aligning the robot with the target cargo ensures the accuracy of the computed loaded overall dimension information; adding visual feature information at the target position of the first connecting surface speeds up the alignment; and performing motion control according to the loaded overall dimension information improves the safety of the loaded robot in motion.
Another embodiment of the present invention, as shown in fig. 3, is a physical size calculation apparatus 100 of a loaded robot, including:
The point cloud obtaining module 110 is configured to obtain second point cloud data of the target cargo through the configured three-dimensional vision sensor; to obtain point cloud data of the robot from the robot's overall dimensions; and to jointly match the second point cloud data of the target cargo with the point cloud data of the robot to obtain the point cloud data of the loaded robot.
The size calculation module 120 is configured to obtain the overall dimension information of the loaded robot from the point cloud data of the loaded robot.
The alignment module 130 is used for aligning the robot with the target cargo in preparation for loading.
The motion control module 140 is used for controlling the motion of the robot according to the loaded overall dimension information of the robot after the target cargo is loaded.
In the embodiment, another method is provided for calculating the overall dimension information of the loaded robot.
One embodiment of the present invention, as shown in fig. 4, is a robot including the physical size calculation apparatus 100 of the aforementioned loaded robot.
Each time the robot loads cargo, it can predict the loaded overall dimension information by means of the physical size calculation apparatus 100 of the loaded robot, and then perform motion control according to that information, improving the safety of the loaded robot in motion.
The invention also provides a concrete implementation scenario in which the method and apparatus for calculating the physical size of a loaded robot are applied to a logistics robot. The specific scheme is as follows:
1) Convert the robot's standard dimensions into a point cloud and determine the origin of the robot coordinate system. For simplicity of calculation, the origin may be defined as the alignment detection point between the robot and the container/shelf; for a latent robot, the center point of the robot's top surface may serve as the origin.
2) The robot needs at least one 3D vision sensor (e.g., a 3D laser, a TOF camera, a structured-light camera).
The main task of the 3D vision sensor here is to measure and calculate the physical dimensions of the container/pallet. The specific implementation process can be as follows:
A) After the robot approaches the loading place, it first travels one circle around the container/shelf so that the 3D vision sensor can obtain complete point cloud data of the container/shelf.
B) The point cloud data is processed (e.g., edge extraction) and computed to obtain the overall dimension data of the container/shelf. One purpose of obtaining this data is to find the bottom plane of the shelf and from it the shelf coordinate system: for example, take the center point of the bottom surface as the coordinate origin o, the line through o parallel to the long edge as the X axis, and the line through o parallel to the short edge as the Y axis.
C) To improve the processing efficiency and accuracy of B) (mainly noise filtering and accurate edge extraction), some visual features (such as two-dimensional codes, or textures with a special response to the 3D sensor) can be added to the edges of the container/shelf during engineering implementation.
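The coordinate-frame construction of step B) can be sketched as a centroid-plus-principal-axes computation on the extracted bottom-plane points. This is an assumed realization, not necessarily the patent's exact processing; `shelf_frame` is an illustrative name.

```python
import numpy as np

def shelf_frame(bottom_pts):
    """Derive the shelf frame of step B) from bottom-plane points (N, 3).

    Origin o: centroid of the bottom face (Z assumed constant on the plane).
    X axis:   direction of the long edge (largest in-plane spread).
    Y axis:   direction of the short edge (smallest in-plane spread).
    """
    xy = bottom_pts[:, :2]
    o = xy.mean(axis=0)
    cov = np.cov((xy - o).T)                 # in-plane covariance
    eigvals, eigvecs = np.linalg.eigh(cov)   # principal axes of the spread
    x_axis = eigvecs[:, np.argmax(eigvals)]  # long-edge direction
    y_axis = eigvecs[:, np.argmin(eigvals)]  # short-edge direction
    return o, x_axis, y_axis
```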
3) The robot aligns with the container/shelf and prepares to connect: a latent robot prepares to start jacking; a pulling robot prepares to attach its hook.
For the latent robot, the general alignment process is as follows: a camera is arranged at a certain position on the top surface of the robot, and a visual feature (such as a two-dimensional code) is arranged at the bottom of the shelf. The robot's pose is adjusted based on the camera's observation of the two-dimensional code to complete the alignment.
To simplify the calculation, the two-dimensional code is typically pre-affixed to the center of the bottom plane of the container/pallet. By observing the two-dimensional code with the camera, the robot learns its position relative to the container/pallet and can adjust its own pose so that the origin of the robot coordinate system coincides exactly with the center of the shelf's two-dimensional code and there is no relative rotational offset between the container/pallet and the robot. As shown in fig. 6, the left view shows a relative rotation angle between the robot 2 and the container/pallet 1, and the right view shows the case with no relative rotational offset.
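The pose adjustment driven by the two-dimensional code observation can be summarized as a small correction loop: observe the marker's offset and rotation in the robot frame, command the negated offset, and repeat until within tolerance. The sketch below is schematic, with hypothetical names and tolerances not taken from the patent.

```python
def alignment_correction(dx, dy, theta, tol_xy=0.005, tol_theta=0.01):
    """One step of marker-based alignment.

    (dx, dy): observed marker-centre offset in the robot frame, metres.
    theta:    observed marker rotation relative to the robot frame, radians.
    Returns (aligned, command); the command simply negates the observed
    offset (unit-gain proportional control; a real controller would ramp
    the motion and re-observe the marker each cycle).
    """
    aligned = abs(dx) < tol_xy and abs(dy) < tol_xy and abs(theta) < tol_theta
    return aligned, (-dx, -dy, -theta)
```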
4) After 3) is completed, the relative position of the robot and the container/pallet is determined. Joint matching of the measured container/shelf point cloud with the robot model point cloud from 1) then easily yields the point cloud of the loaded robot. The outline of the loaded robot is obtained from this point cloud, and route planning, collision detection and the like are performed based on it before the robot unloads, improving the safety of the transport process.
For example, assume that the origin of the shelf coordinate system is the center of the bottom surface, that its X and Y axis directions coincide with those of the robot coordinate system, and that the origin of the robot coordinate system is the alignment detection point between the robot and the container/shelf.
Obviously, after 3) above, the origin of the robot coordinate system coincides exactly with the center of the shelf's two-dimensional code, and there is no relative rotational offset between the container/shelf and the robot. The two coordinate systems then coincide completely, and the joint matching of the point clouds is very simple.
If the robot in 3) above is not perfectly aligned, that is, the origin of the robot coordinate system does not coincide exactly with the center of the shelf's two-dimensional code, or the container/pallet is rotationally offset relative to the robot, the joint matching of the point clouds is slightly more complicated. The camera on the robot observes the two-dimensional code to obtain the offsets X0 and Y0 of the shelf coordinate origin relative to the robot coordinate origin, and the rotation angle theta of the shelf axes relative to the robot axes; the conversion between the two coordinate systems then follows from the plane coordinate transform formula. Since the shelf bottom surface completely overlaps the robot's top surface, the Z coordinates are identical, so the X, Y conversion relationship between the two coordinate systems suffices.
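The plane coordinate transform and the subsequent joint matching can be sketched directly from the observed quantities X0, Y0 and theta. A minimal sketch, assuming both clouds already share the Z axis as described above; the function names are illustrative.

```python
import numpy as np

def shelf_to_robot(pts_shelf, x0, y0, theta):
    """Map shelf-frame points (N, 3) into the robot frame.

    (x0, y0): offset of the shelf origin in the robot frame.
    theta:    rotation of the shelf axes relative to the robot axes.
    Z passes through unchanged, since the shelf bottom rests on the
    robot's top surface.
    """
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])          # 2D rotation by theta
    out = pts_shelf.astype(float).copy()
    out[:, :2] = pts_shelf[:, :2] @ R.T + np.array([x0, y0])
    return out

def loaded_point_cloud(robot_pts, shelf_pts, x0=0.0, y0=0.0, theta=0.0):
    """Joint matching: express both clouds in the robot frame and merge."""
    return np.vstack([robot_pts, shelf_to_robot(shelf_pts, x0, y0, theta)])
```

When the alignment of step 3) is perfect (X0 = Y0 = theta = 0) the transform reduces to the identity and the matching degenerates to a simple concatenation.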
It should be noted that the above embodiments can be freely combined as necessary. The foregoing is only a preferred embodiment of the present invention; for those skilled in the art, various modifications and improvements can be made without departing from the principle of the present invention, and these modifications and improvements should also be regarded as falling within the protection scope of the present invention.

Claims (8)

1. A physical dimension calculation method of a loaded robot, characterized by comprising the following steps:
acquiring second point cloud data of the target cargo through the configured three-dimensional visual sensor;
extracting the overall dimension information of the target cargo according to the second point cloud data of the target cargo;
obtaining the loaded overall dimension information of the robot according to the overall dimension information of the target goods and the overall dimension information of the robot;
aligning the robot with the target goods to prepare for loading the goods;
after the target goods are loaded, the robot is subjected to motion control according to the loaded outline dimension information of the robot;
wherein the aligning of the robot with the target goods comprises:
determining a target position of a first connecting surface according to the overall dimension information of the target cargo, wherein the first connecting surface is a contact surface for connecting the target cargo side and the robot;
and the robot moves to the target position of the first connecting surface until the target position of a second connecting surface is superposed with the target position of the first connecting surface, wherein the second connecting surface is a contact surface for connecting the robot side and the target goods.
2. The method of claim 1, wherein the obtaining second point cloud data of the target cargo via the configured three-dimensional vision sensor comprises:
traveling around the target cargo for one circle, and acquiring first point cloud data of the target cargo from a plurality of positions;
and obtaining second point cloud data of the target cargo according to all the first point cloud data.
3. The computing method of claim 1, further comprising:
adding visual characteristic information to the target position of the first connection face;
the robot moving to the target position of the first connection face includes: and the robot moves to the visual characteristic information through observation of the visual characteristic information.
4. The computing method of claim 3, further comprising:
acquiring the relative pose relationship between the robot and the target cargo according to the observed visual characteristic information;
and adjusting the pose of the robot according to the relative pose relation, so that the target position of the second connecting surface is coincident with the target position of the first connecting surface, and the robot and the target goods have no relative rotation offset.
5. A physical dimension calculation method of a loaded robot, characterized by comprising the following steps:
acquiring second point cloud data of the target cargo through the configured three-dimensional visual sensor;
obtaining point cloud data of the robot according to the appearance size of the robot;
performing joint matching on the second point cloud data of the target goods and the point cloud data of the robot to obtain point cloud data loaded by the robot;
obtaining the external dimension information of the loaded robot according to the point cloud data of the loaded robot;
aligning the robot with the target goods to prepare for loading the goods;
after the target goods are loaded, the robot is subjected to motion control according to the loaded outline dimension information of the robot;
wherein the aligning of the robot with the target goods comprises:
determining a target position of a first connecting surface according to the overall dimension information of the target cargo, wherein the first connecting surface is a contact surface for connecting the target cargo side and the robot; the overall dimension information of the target cargo is obtained according to the second point cloud data of the target cargo;
and the robot moves to the target position of the first connecting surface until the target position of a second connecting surface is superposed with the target position of the first connecting surface, wherein the second connecting surface is a contact surface for connecting the robot side and the target goods.
6. A physical size calculation apparatus of a loaded robot, comprising:
the point cloud acquisition module is used for acquiring second point cloud data of the target cargo through the configured three-dimensional visual sensor;
the size calculation module is used for extracting the outline size information of the target cargo according to the second point cloud data of the target cargo; obtaining the loaded overall dimension information of the robot according to the overall dimension information of the target goods and the overall dimension information of the robot;
the alignment module is used for aligning the robot and the target goods to prepare for loading the goods;
the motion control module is used for controlling the motion of the robot according to the external dimension information of the loaded robot after the target goods are loaded;
the alignment module is further configured to determine a target position of a first connection surface according to the overall dimension information of the target cargo, where the first connection surface is a contact surface between the target cargo side and the robot; and the robot moves to the target position of the first connecting surface until the target position of a second connecting surface is superposed with the target position of the first connecting surface, wherein the second connecting surface is a contact surface for connecting the robot side and the target goods.
7. A physical size calculation apparatus of a loaded robot, comprising:
the point cloud acquisition module is used for acquiring second point cloud data of the target cargo through the configured three-dimensional visual sensor; obtaining point cloud data of the robot according to the appearance size of the robot; performing joint matching on the second point cloud data of the target goods and the point cloud data of the robot to obtain point cloud data loaded by the robot;
the size calculation module is used for obtaining the appearance size information of the loaded robot according to the point cloud data of the loaded robot;
the alignment module is used for aligning the robot and the target goods to prepare for loading the goods;
the motion control module is used for controlling the motion of the robot according to the external dimension information of the loaded robot after the target goods are loaded;
the alignment module is further configured to determine a target position of a first connection surface according to the overall dimension information of the target cargo, where the first connection surface is a contact surface where the target cargo side is connected with the robot; the overall dimension information of the target cargo is obtained according to the second point cloud data of the target cargo; and the robot moves to the target position of the first connecting surface until the target position of a second connecting surface is superposed with the target position of the first connecting surface, wherein the second connecting surface is a contact surface for connecting the robot side and the target goods.
8. A robot characterized by comprising the physical size calculation apparatus of a loaded robot as claimed in claim 6 or 7.
CN202110400205.9A 2021-04-14 2021-04-14 Physical size calculation method and device of belt-loaded robot and robot Active CN113084815B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110400205.9A CN113084815B (en) 2021-04-14 2021-04-14 Physical size calculation method and device of belt-loaded robot and robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110400205.9A CN113084815B (en) 2021-04-14 2021-04-14 Physical size calculation method and device of belt-loaded robot and robot

Publications (2)

Publication Number Publication Date
CN113084815A CN113084815A (en) 2021-07-09
CN113084815B true CN113084815B (en) 2022-05-17

Family

ID=76677584

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110400205.9A Active CN113084815B (en) 2021-04-14 2021-04-14 Physical size calculation method and device of belt-loaded robot and robot

Country Status (1)

Country Link
CN (1) CN113084815B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102980512A (en) * 2012-08-29 2013-03-20 武汉武大卓越科技有限责任公司 Fixed type automatic volume measurement system and measuring method thereof
CN107037438A (en) * 2016-02-04 2017-08-11 梅特勒-托莱多有限公司 The apparatus and method of the size of the object carried for the vehicle for determining to move in measured zone
KR20170126842A (en) * 2017-11-13 2017-11-20 현대자동차주식회사 Vehicle and control method for the vehicle
EP3516543A1 (en) * 2016-09-20 2019-07-31 Renault S.A.S. Method for determining an overall dimension of a vehicle equipped with an external load
CN210526689U (en) * 2019-07-30 2020-05-15 中北大学 Transport cooperation robot
CN111336959A (en) * 2018-12-18 2020-06-26 北京京东尚科信息技术有限公司 Truck cargo volume processing method and device, equipment and computer readable medium
CN112325794A (en) * 2020-10-12 2021-02-05 武汉万集信息技术有限公司 Method, device and system for determining overall dimension of vehicle
CN112406887A (en) * 2020-11-25 2021-02-26 北京经纬恒润科技股份有限公司 Method and system for acquiring center of mass position of towing trailer

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108238460B (en) * 2018-01-09 2020-04-21 富通集团(嘉善)通信技术有限公司 Automatic loading system and method for optical cable
US20190287262A1 (en) * 2018-03-19 2019-09-19 Delphi Technologies, Llc System and method to determine size of vehicle carrying cargo
US11585934B2 (en) * 2019-04-30 2023-02-21 Lg Electronics Inc. Cart robot having auto-follow function
CN212197049U (en) * 2019-09-30 2020-12-22 深圳市海柔创新科技有限公司 Conveying device and conveying robot
CN111846810B (en) * 2020-07-17 2022-09-30 坎德拉(深圳)科技创新有限公司 Distribution robot, automatic distribution method, robot system, and storage medium


Also Published As

Publication number Publication date
CN113084815A (en) 2021-07-09

Similar Documents

Publication Publication Date Title
CN110383334B (en) Method, system and device for segmenting an object
CN109844807B (en) Method, system and apparatus for segmenting and sizing objects
US10451405B2 (en) Dimensioning system for, and method of, dimensioning freight in motion along an unconstrained path in a venue
US10721451B2 (en) Arrangement for, and method of, loading freight into a shipping container
US9707682B1 (en) Methods and systems for recognizing machine-readable information on three-dimensional objects
US10290115B2 (en) Device and method for determining the volume of an object moved by an industrial truck
US20170150129A1 (en) Dimensioning Apparatus and Method
JP2017191605A (en) Method and system for measuring dimensions of target object
CN111328408B (en) Shape information generating device, control device, loading/unloading device, distribution system, program, and control method
WO2016033451A1 (en) Stationary dimensioning apparatus
CN111609801B (en) Multi-size workpiece thickness measuring method and system based on machine vision
KR101095579B1 (en) A method for positioning and orienting of a pallet based on monocular vision
Bellandi et al. Roboscan: a combined 2D and 3D vision system for improved speed and flexibility in pick-and-place operation
CN115582827A (en) Unloading robot grabbing method based on 2D and 3D visual positioning
CN114170521B (en) Forklift pallet butt joint identification positioning method
CN113483664B (en) Screen plate automatic feeding system and method based on line structured light vision
CN113084815B (en) Physical size calculation method and device of belt-loaded robot and robot
US20220230339A1 (en) System and method for automatic container configuration using fiducial markers
CN116425088B (en) Cargo carrying method, device and robot
CN112581519A (en) Method and device for identifying and positioning radioactive waste bag
CN117011362A (en) Method for calculating cargo volume and method for dynamically calculating volume rate
Prasse et al. New approaches for singularization in logistic applications using low cost 3D sensors
CN116309882A (en) Tray detection and positioning method and system for unmanned forklift application
Kita et al. Detection and localization of pallets on shelves using a wide-angle camera
CN115346211A (en) Visual recognition method, visual recognition system and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: 202150 room 205, zone W, second floor, building 3, No. 8, Xiushan Road, Chengqiao Town, Chongming District, Shanghai (Shanghai Chongming Industrial Park)

Patentee after: Shanghai Noah Wood Robot Technology Co.,Ltd.

Address before: 200335 402 rooms, No. 33, No. 33, Guang Shun Road, Shanghai

Patentee before: Shanghai zhihuilin Medical Technology Co.,Ltd.

PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: A physical size calculation method and device for a loaded robot, robot

Effective date of registration: 20230627

Granted publication date: 20220517

Pledgee: Shanghai Rural Commercial Bank Co.,Ltd. Pudong branch

Pledgor: Shanghai Noah Wood Robot Technology Co.,Ltd.

Registration number: Y2023310000307