AU2021266204A1 - Mobile robot pose correction method and system for recognizing dynamic pallet terminals - Google Patents


Info

Publication number
AU2021266204A1
Authority
AU
Australia
Prior art keywords
pallet
point
mobile robot
point cloud
points
Prior art date
Legal status
Granted
Application number
AU2021266204A
Other versions
AU2021266204B2 (en)
Inventor
Xinbiao Gao
Panling HUANG
Kai Song
Di Wu
Xuhao Yang
Yifan Zhao
Huazhang Zhou
Jun Zhou
Current Assignee
Shandong University
Shandong Alesmart Intelligent Technology Co Ltd
Original Assignee
Shandong University
Shandong Alesmart Intelligent Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shandong University and Shandong Alesmart Intelligent Technology Co Ltd
Publication of AU2021266204A1
Application granted
Publication of AU2021266204B2
Status: Active

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B23: MACHINE TOOLS; METAL-WORKING NOT OTHERWISE PROVIDED FOR
    • B23P: METAL-WORKING NOT OTHERWISE PROVIDED FOR; COMBINED OPERATIONS; UNIVERSAL MACHINE TOOLS
    • B23P6/00: Restoring or reconditioning objects
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25B: TOOLS OR BENCH DEVICES NOT OTHERWISE PROVIDED FOR, FOR FASTENING, CONNECTING, DISENGAGING OR HOLDING
    • B25B27/00: Hand tools, specially adapted for fitting together or separating parts or objects whether or not involving some deformation, not otherwise provided for
    • B25B27/02: Hand tools of B25B27/00 for connecting objects by press fit or detaching same

Landscapes

  • Engineering & Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure provides a mobile robot pose correction method and system for recognizing dynamic pallet terminals. The method includes: obtaining the laser point cloud cluster data of a pallet, and processing the obtained laser point cloud cluster data to obtain pallet leg feature mean characterization points; obtaining a point cloud distribution score and a point cloud shape score according to all the pallet leg feature mean characterization points, and obtaining pallet leg feature point coordinates according to the product of the two scores; and performing pose correction on a mobile robot according to the obtained pallet leg feature point coordinates. The present disclosure overcomes the defect of poor application adaptability in the prior art, where the pose of a mobile robot cannot be corrected in real time when the pose of a dynamic pallet changes, realizes highly flexible and highly adaptable correction of the pose of the mobile robot, and ensures the efficiency and safety of the operation of the robot.

Description

Drawings
I|--------------------------------------------------------------------------1
2 5
3
Fig. 9
5/5
Description
MOBILE ROBOT POSE CORRECTION METHOD AND SYSTEM FOR RECOGNIZING DYNAMIC PALLET TERMINALS
Field of the Invention
The present disclosure relates to the technical field of robot pose control, in particular to a mobile
robot pose correction method and system for recognizing dynamic pallet terminals.
Background of the Invention
The statements in this section merely provide a background art related to the present disclosure, but
do not necessarily constitute the prior art.
With the rapid development of computer technology and sensor technology, mobile robots have
gradually become widely used, especially in the logistics industry. When performing
logistics-related operations, a knapsack or forklift mobile robot uses a pallet as a communication
medium with goods, the robot uses a jacking device to lift the pallet to complete the transportation
of a specified path and the loading and unloading at a specified station; the robot plays an
important role in accurate loading and unloading of the goods and in intelligent carrying.
A traditional pallet placement position is a known, static point on the path of the mobile robot,
which in turn imposes higher accuracy requirements on the placement pose of the pallet. Once a
relatively large error arises in the pose of the pallet, collisions between the vehicle and
surrounding objects become likely, and the efficiency of the entire production line is strongly
affected. For the pose change of a dynamic pallet terminal, the traditional method cannot adapt
the pose correction action of the mobile robot to the dynamic change of the pallet, so the system
lacks a certain degree of flexibility and intelligence.
The inventor found that the prerequisite for applying a mobile robot pose correction system is to
accurately recognize the pallet and determine its position. The existing pallet recognition
methods include the following. The RFID-based method is simple and reliable, but suffers from low
positioning accuracy and flexibility. Visual target recognition is highly flexible and accurate,
but it is vulnerable to ambient-light interference and therefore has poor environmental
adaptability. 2D laser radar recognition has stronger environmental adaptability and higher
position accuracy, but a pallet recognition template must be established in advance, which adapts
poorly to different shelves in different working scenarios; moreover, the data size of 2D laser
point clouds is relatively small, making false detection likely. 3D laser radar recognition has
stronger adaptability to scenarios and environments and higher accuracy, but the sensor equipment
is more expensive.
Summary of the Invention
In order to solve the shortcomings of the prior art, the present disclosure provides a mobile robot
pose correction method and system for recognizing dynamic pallet terminals, which overcome the
defect of poor application adaptability due to the inability to correct the pose of a mobile robot in
real time resulting from the pose change in a dynamic pallet in the prior art, realize highly flexible
and highly adaptable correction of the pose of the mobile robot, and ensure the efficiency and safety
of the operation of the robot.
In order to achieve the above objectives, the present disclosure adopts the following technical
solutions:
The first aspect of the present disclosure provides a mobile robot pose correction method for
recognizing dynamic pallet terminals.
A mobile robot pose correction method for recognizing dynamic pallet terminals, including the
following processes:
obtaining the laser point cloud cluster data of a pallet, and processing the obtained laser point cloud
cluster data to obtain pallet leg feature mean characterization points;
obtaining a point cloud distribution score and a point cloud shape score according to all the pallet
leg feature mean characterization points, and obtaining pallet leg feature point coordinates
according to a product of the two scores; and
performing pose correction on a mobile robot according to the obtained pallet leg feature point
coordinates.
As an optional embodiment, the step of obtaining the laser point cloud cluster data of the pallet, and
processing the obtained laser point cloud cluster data includes the following processes:
after the mobile robot stops moving, obtaining at least one frame of scanning point cloud data of a
laser radar to serve as original point cloud data;
eliminating and screening the original point cloud data;
performing feature point classification on the point cloud data after the eliminating and screening
processing, classifying point clouds into line feature points and corner feature points, eliminating
the line feature points, and dividing the corner feature points into breakpoints and corner points;
performing voxel filtering on the breakpoints;
performing windowed pallet feature filtering on the basis of the breakpoints; and
completing clustering search according to a preset pallet search box.
Further, the step of eliminating and screening the original point cloud data includes the following
processes:
eliminating empty points, limiting a rectangular range of a middle distance in front of the laser radar
as a pallet placement area, screening out the point clouds existing in this rectangular range in the
original data to serve as input information for recognition and detection, eliminating the remaining
unscreened data, and keeping the order of the original point cloud data unchanged during the
process.
Further, the step of performing feature point classification on the point cloud data after the
eliminating and screening processing includes the following processes:
classifying the point clouds into points with line features and corner features by establishing a
curvature threshold, traversing the point clouds to give classification information and eliminate the
line feature points, establishing a distribution variance threshold and a curvature threshold for the
screened out corner feature points, and dividing the corner feature points into breakpoints and
corner points again.
Further, the step of performing voxel filtering on the breakpoints includes the following processes:
performing filtering on the point clouds belonging to the same breakpoint, and selecting the
breakpoint with the minimum serial number within a discontinuous point cloud range as breakpoint
characterization.
Further, the step of performing windowed pallet feature filtering on the basis of the breakpoints
includes the following processes:
for the breakpoints obtained by filtering, sequentially checking whether there are at least two
breakpoints within a rectangular range, storing the breakpoint linked lists that meet the
requirement in a queue, and eliminating the points that do not.
Further, the step of completing clustering search according to the preset pallet search box includes
the following steps:
taking out the breakpoint linked lists in the queue in sequence, calculating a mean of all objects in
the breakpoint linked lists, and using the mean as the central point of the pallet search box, so as to
construct the pallet search box; and
performing a clustering operation on all point clouds in the search box, classifying the point clouds
into linear feature points, pallet leg feature points and invalid points again in combination with the
obtained curvature threshold, and storing the pallet leg feature mean characterization points in all
search boxes.
As an optional embodiment, a circular range is constructed with the obtained pallet leg feature point
coordinates as the central point, and the point cloud distribution score is calculated, and is
specifically the ratio of the number of the same type of clustered point clouds falling in the circular
range to the number of the same type of clustered point clouds.
As an optional embodiment, the acquisition of the point cloud shape score includes:
performing selection and calculation on all the pallet leg feature mean characterization points,
circularly selecting three points in the point clouds, and calculating slope values of connecting lines
of the three characterization points; and
calculating the degree of a right angle between the connecting lines according to the obtained slopes
of the connecting lines, and calculating the pallet leg point cloud shape score through the included
angles among the connecting lines of the three points.
As an optional embodiment, after the two scores are obtained, the pallet leg point cloud shape score
is multiplied by all the pallet leg point cloud distribution scores that constitute a quadrilateral shape
of the pallet to obtain a total score, and the group of point clouds with the highest score is used as
the pallet leg feature point coordinates.
As an optional embodiment, whether there is data meeting the feature information of a reflector is
queried from the point cloud reflectance data in a detection frame; if so, and a high-reflectance
characterization point is located at the front end of the total score queue, the priority of the
characterization point linked list carrying the reflector information is updated, and the first
pallet characterization point linked list in the total score queue is output after the adjustment
is completed.
As an optional embodiment, the step of performing pose correction on the mobile robot according
to the obtained pallet leg feature point coordinates includes the following processes:
according to the obtained pallet leg feature point coordinates, sorting the coordinates relative
to the headstock orientation of the mobile robot from small to large, and calculating a virtual
cut-in cap point and a virtual termination cap point of the mobile robot;
adjusting the posture of the mobile robot to the virtual cut-in cap point, constructing a virtual path
to cut into the lower side of the pallet, and when the position reaches the virtual termination cap
point, controlling a top end camera module of the mobile robot to recognize an identification code
at the bottom of the pallet; and
after the mobile robot operates normally, returning to a normal operation path of the mobile robot
according to the cut-in path, and deleting the virtual cut-in cap point and the virtual termination cap
point.
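The patent text does not give explicit formulas for the virtual cap points. The following is one plausible geometric sketch under assumed conventions (robot at the origin of its own frame; the 1.5 m cut-in distance is an arbitrary illustrative value, not from the disclosure):

```python
import math

def cap_points(legs, cut_in_dist=1.5):
    """Given four pallet leg (x, y) coordinates in the robot frame, place a
    virtual termination cap point at the pallet centre and a virtual cut-in
    cap point cut_in_dist metres before it along the approach axis."""
    cx = sum(x for x, _ in legs) / len(legs)
    cy = sum(y for _, y in legs) / len(legs)
    # approach axis: from the robot (origin) toward the pallet centre
    heading = math.atan2(cy, cx)
    cut_in = (cx - cut_in_dist * math.cos(heading),
              cy - cut_in_dist * math.sin(heading))
    termination = (cx, cy)
    return cut_in, termination, heading

cut_in, term, hd = cap_points([(3.0, 1.0), (5.0, 1.0), (3.0, -1.0), (5.0, -1.0)])
print(cut_in, term)  # (2.5, 0.0) (4.0, 0.0)
```

After the robot reaches the termination cap point and confirms the identification code, both virtual points would simply be discarded, as the text describes.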
The second aspect of the present disclosure provides a mobile robot pose correction system for
recognizing dynamic pallet terminals.
A mobile robot pose correction system for recognizing dynamic pallet terminals, including:
a storage pallet terminal recognition and detection module, configured to: obtain the laser point
cloud cluster data of a pallet, and process the obtained laser point cloud cluster data to obtain pallet
leg feature mean characterization points;
a pallet credibility self-evaluation module, configured to obtain a point cloud distribution score and
a point cloud shape score according to all the pallet leg feature mean characterization points, and
obtain pallet leg feature point coordinates according to a product of the two scores; and
a mobile robot pose correction module, configured to perform pose correction on a mobile robot
according to the obtained pallet leg feature point coordinates.
The third aspect of the present disclosure provides a computer readable storage medium on which a
program is stored, and when executed by a processor, the program implements the steps in the
mobile robot pose correction method for recognizing dynamic pallet terminals in the first aspect of
the present disclosure.
The fourth aspect of the present disclosure provides an electronic device, including a memory, a
processor, and a program stored in the memory and capable of running on the processor, wherein
when the processor executes the program, the program implements the steps in the mobile robot
pose correction method for recognizing dynamic pallet terminals in the first aspect of the present
disclosure.
Compared with the prior art, the present disclosure has beneficial effects as follows:
1. In the method, the system, the medium or the electronic device described in the present
disclosure, coupling processing is performed on machine learning and feature extraction of
scanning data of the laser radar, the laser point clouds are divided into linear features, corner
features and breakpoint features according to the obtained category and structure information, and
an approximate point cloud of the pallet is determined in the form of a sliding window by means of
the features of the pallet and the structural environment, as well as the clustering information in the
search box. Compared with the recognition of a shelf by the traditional laser point cloud, the defect
of building a pallet length template is removed. Compared with the traditional form of RFID, this
solution can intelligently detect the pallet at any dynamically changing position, and no mark point
needs to be set in the path, such that the accuracy is higher, and the degree of flexibility in terminal
operations is higher.
2. In the method, the system, the medium or the electronic device described in the present
disclosure, a pallet credibility self-evaluation function is established to increase the confidence of
recognition, safe and reliable recognition and detection works can be ensured, the system is
endowed with discrimination ability, and the intelligence level is higher.
3. In the method, the system, the medium or the electronic device described in the present
disclosure, for terminal pose correction, the pallet can be used as a reference to correct a cumulative
error during the transportation path process of the mobile robot, the cut-in pose is automatically
adjusted without human intervention, and a corresponding response can be made to the randomly
placed pallet for adjustment, thereby reducing the strictness of workers on the pallet placement on a
production line and improving the fault tolerance of the mobile robot.
Brief Description of the Drawings
The drawings constituting a part of the present disclosure are used for providing a further
understanding of the present disclosure. The exemplary embodiments of the present disclosure and
the descriptions thereof are used for explaining the present disclosure, but do not constitute
improper limitations of the present disclosure.
Fig. 1 is a schematic diagram of composition of a mobile robot pose correction system for
recognizing dynamic pallet terminals provided by embodiment 1 of the present disclosure.
Fig. 2 is a total flow diagram of a working method of the mobile robot pose correction system for
recognizing dynamic pallet terminals provided by embodiment 1 of the present disclosure.
Fig. 3 is a flow diagram of the working method of a storage pallet terminal recognition and
detection module in the mobile robot pose correction system for recognizing dynamic pallet
terminals provided by embodiment 1 of the present disclosure.
Fig. 4 is a flow diagram of the working method of a pallet credibility self-evaluation module in the
mobile robot pose correction system for recognizing dynamic pallet terminals provided by
embodiment 1 of the present disclosure.
Fig. 5 is a flow diagram of the working method of a mobile robot pose correction module in the
mobile robot pose correction system for recognizing dynamic pallet terminals provided by
embodiment 1 of the present disclosure.
Fig. 6 is a schematic structural diagram of a storage pallet used by a backpack mobile robot
provided by embodiment 1 of the present disclosure.
Fig. 7 is a planning map of a laser radar point cloud layout when the mobile robot provided by
embodiment 1 of the present disclosure recognizes the storage pallet.
Fig. 8 is a laser radar scanning point cloud diagram of the storage pallet provided by embodiment 1
of the present disclosure in a hall environment.
Fig. 9 is a schematic diagram of pose correction work after the mobile robot provided by
embodiment 1 of the present disclosure recognizes a dynamic pallet terminal.
Detailed Description of the Embodiments
The present disclosure will be further described below in conjunction with the drawings and
embodiments.
It should be pointed out that the following detailed descriptions are all illustrative, and are intended
to provide further descriptions of the present disclosure. Unless otherwise indicated, all technical
and scientific terms used herein have the same meaning as commonly understood by those of
ordinary skill in the technical field to which the present disclosure belongs.
It should be noted that the terms used here are only for describing specific embodiments, and are
not intended to limit the exemplary embodiments according to the present disclosure. As used
herein, unless the context clearly indicates otherwise, the singular form is also intended to include
the plural form. In addition, it should also be understood that when the terms "comprising" and/or
"including" are used in this specification, they indicate the presence of features, steps,
operations, devices, components, and/or combinations thereof.
In the case of no conflict, the embodiments in the present disclosure and the features in the
embodiments can be combined with each other.
Embodiment 1:
Embodiment 1 of the present disclosure provides a mobile robot pose correction system for
recognizing dynamic pallet terminals. As shown in Fig. 1, the system includes a storage pallet
terminal recognition and detection module, a pallet credibility self-evaluation module and a mobile
robot pose correction module; and
the storage pallet terminal recognition and detection module mainly completes such works as
eliminating and screening key frame laser point clouds, classifying feature points, performing
machine learning and unsupervised learning analysis, and performing window search on a possible
range of the pallet; the pallet credibility self-evaluation module mainly completes such works as
performing online supervision on a pallet leg point cloud self-evaluation function, performing
reflector detection on the basis of a feature tag, and outputting a score queue of credibility priority
updating; and the mobile robot pose correction module constructs a virtual cap point, so that a
mobile robot adaptively adjusts the operation pose.
Fig. 2 shows the working method of the mobile robot pose correction system for recognizing dynamic
pallet terminals. The mobile robot autonomously plans its motion according to a path given by a
scheduling system. When the robot reaches a specified station and needs to perform a terminal
operation, the mobile robot stops, and the storage pallet terminal recognition and detection
module sends the recognized approximate pallet point cloud cluster queue of a window range to the
pallet credibility self-evaluation module, which calculates the score of each obtained point cloud
cluster and passes the point cloud information of the group of pallet legs with the highest score
to the mobile robot pose correction module, so as to flexibly complete the pose correction of the
robot in real time, thereby ensuring the efficiency and safety of the operation of the robot.
For the storage pallet terminal recognition and detection module: the module is composed of a
laser radar and industrial cameras on the mobile robot substrate. The laser radar is installed at
the middle position of the front end of the mobile robot, has the maximum scanning range, and
outputs a point cloud with position information. The two industrial cameras are installed in
opposite directions: the camera at the top end detects the two-dimensional code information of the
terminal pallet, and the camera at the bottom end detects precise path position information. The
module performs data processing on the point cloud within the recognition range of the laser radar
to obtain a point cloud cluster that conforms to the feature information of the pallet, and
confirms the terminal pallet information through the industrial camera, so as to complete the
feedback loop of the work scheduling system of the mobile robot.
The working method of the storage pallet terminal recognition and detection module, as shown in
Fig. 3, includes the following steps:
step A1: when the mobile robot runs to the specified lifting operation station on the specified
path of the scheduling system, the robot stops moving and simultaneously stores a frame of
scanning point cloud data of the laser radar to serve as the original data for subsequent
processing.
Fig. 6 shows the pallet used by a knapsack mobile robot. The bottom end of the pallet is supported
by four legs, so the pallet appears in the original scanning point cloud of the mobile robot as
four leg point clouds. In the figure, 1 represents an information two-dimensional code of the
pallet, used for feeding back transportation information to the scheduling system and confirming
that the lifting operation is running as scheduled, and 2 represents a reflector with high
reflectance, which serves as a feature tag of the pallet to enhance its recognizability.
Step A2: screening and eliminating the original point cloud data.
There are empty points in the original data (the x and y coordinates output for points that are
not detected are all 0), and these empty points are eliminated to reduce the subsequent
calculation amount. Since the density of laser point clouds becomes sparser as the distance
increases, a rectangular range at a middle distance in front of the laser radar is limited as the
pallet placement area.
Fig. 7 shows a schematic diagram of the screening and eliminating range of the laser point clouds
when the mobile robot is at the recognition position. The area 1 represents a pallet parking range
with a length of 28.28 m and a width of 14.14 m; this range is consistent with the pallet parking
range in the factory, which ensures the accuracy of subsequent pallet recognition. The area 2
represents a laser point cloud removal part: since the density of the laser point clouds is
inversely proportional to distance, it is difficult to recognize pallet features from points that
are too far away, which is likely to cause safety problems. The area 3 also represents a laser
point cloud removal part: the pallet is normally stored in front of the robot body, so the point
clouds in this area are eliminated.
The point clouds existing in the rectangular range of area 1 in the original data are screened out
as the input information for recognition and detection, the remaining unscreened data is
eliminated, and the order of the original point cloud data remains unchanged during the process.
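As a minimal sketch of step A2 (the exact rectangle bounds and data layout are illustrative assumptions; only the stated parking-range dimensions come from the disclosure):

```python
def screen_points(points, x_min=0.5, x_max=28.28, y_half=7.07):
    """Keep ordered scan points inside a rectangular pallet area in front of
    the radar; drop empty points (undetected returns read as (0, 0))."""
    kept = []
    for idx, (x, y) in enumerate(points):
        if x == 0 and y == 0:          # empty point: no laser return
            continue
        if x_min <= x <= x_max and -y_half <= y <= y_half:
            kept.append((idx, x, y))   # keep original serial number / order
    return kept

scan = [(0.0, 0.0), (3.0, 1.0), (40.0, 2.0), (5.0, -2.0)]
print(screen_points(scan))  # [(1, 3.0, 1.0), (3, 5.0, -2.0)]
```

Retaining the original index alongside each kept point preserves the scan order that the later breakpoint steps rely on.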
Step A3: processing the input data to complete feature point classification.
The input data is used for calculating curvatures, means and variance values of the corresponding
point cloud coordinates within a neighborhood of 3 points on each side, and the calculation
formulas are as follows:

$cur_j = \frac{1}{n}\left[\left(\sum_{i=j-3}^{j+3}(x_j - x_i)\right)^2 + \left(\sum_{i=j-3}^{j+3}(y_j - y_i)\right)^2\right]$

$\bar{x}_j = \frac{1}{n}\sum_{i=j-3}^{j+3} x_i, \quad \bar{y}_j = \frac{1}{n}\sum_{i=j-3}^{j+3} y_i$

$x_{var} = \frac{1}{n}\sum_{i=j-3}^{j+3}(x_i - \bar{x}_j)^2, \quad y_{var} = \frac{1}{n}\sum_{i=j-3}^{j+3}(y_i - \bar{y}_j)^2$

wherein $cur$, $\bar{x}$, $\bar{y}$, $x_{var}$ and $y_{var}$ respectively represent the
curvatures, x coordinate means, y coordinate means, x coordinate variances and y coordinate
variances of the point clouds, $n = 7$ is the neighborhood size, and the subscript $j$ represents
the serial number of the current point cloud. By establishing a curvature threshold of 0.01 m, the
point clouds are classified as points with line features and corner features; the point clouds are
traversed to assign classification information and eliminate the line feature points; a
distribution variance threshold and a curvature threshold are established for the screened-out
corner feature points; and the corner feature points are divided into breakpoints and corner
points again.
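Under illustrative assumptions (a seven-point neighborhood and a LOAM-style curvature in which the summed neighbor offsets cancel on straight segments; the patent's exact curvature definition is not fully recoverable from the text), the classification in step A3 can be sketched as:

```python
def neighborhood_stats(pts, j, w=3):
    """Mean, variance and a simple curvature over points j-w .. j+w."""
    nb = pts[j - w:j + w + 1]
    n = len(nb)
    xm = sum(p[0] for p in nb) / n
    ym = sum(p[1] for p in nb) / n
    xvar = sum((p[0] - xm) ** 2 for p in nb) / n
    yvar = sum((p[1] - ym) ** 2 for p in nb) / n
    # LOAM-style curvature: the summed offsets cancel on a straight segment
    dx = sum(pts[j][0] - p[0] for p in nb)
    dy = sum(pts[j][1] - p[1] for p in nb)
    cur = (dx * dx + dy * dy) / n
    return cur, xm, ym, xvar, yvar

def classify(pts, cur_thresh=0.01, w=3):
    """Label each interior point 'line' or 'corner' by curvature."""
    labels = {}
    for j in range(w, len(pts) - w):
        cur, *_ = neighborhood_stats(pts, j, w)
        labels[j] = 'corner' if cur > cur_thresh else 'line'
    return labels
```

A second pass over the points labelled `'corner'`, using the variance statistics already returned here, would then separate breakpoints from corner points as the text describes.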
Fig. 8 shows a laser radar scanning point cloud map of the storage pallet in a hall environment.
The area 1 in the figure represents the screened-out linear features; since the main
characterization points of the pallet are breakpoints and corner points, the linear features are
eliminated. 2 represents the position of the laser radar in the laser point cloud. 4 in the figure
represents a corner point; corner points are mostly straight turning points of obstacles, such as
a wall corner. 5 in the figure represents a breakpoint, that is, a discontinuous point, which
mainly appears where the laser point cloud jumps from one object to another.
Step A4: performing voxel filtering on breakpoint feature points.
Because of similar curvatures, repeated breakpoints occur at each discontinuity in the point
cloud; the point clouds belonging to the same breakpoint are therefore filtered, and the
breakpoint with the minimum serial number within the range of the discontinuous point cloud is
selected as the breakpoint characterization.
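A sketch of the breakpoint voxel-filtering idea, assuming breakpoints are identified by their serial numbers and that a small serial-number gap (an illustrative parameter) separates distinct discontinuities:

```python
def dedupe_breakpoints(breakpoints, gap=5):
    """Collapse runs of breakpoints with near-consecutive serial numbers,
    keeping the minimum serial number of each run as its characterization."""
    reps = []
    last = None
    for sn in sorted(breakpoints):
        if last is None or sn - last > gap:  # start of a new discontinuity
            reps.append(sn)
        last = sn
    return reps

print(dedupe_breakpoints([12, 13, 14, 40, 41, 90]))  # [12, 40, 90]
```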
Step A5: performing windowed pallet feature filtering on the basis of the breakpoints.
For the breakpoints obtained by the above filtering processing, it is sequentially checked whether
there are at least two breakpoints within a 4×4 m² rectangular range; the breakpoint linked lists
meeting the requirement are stored in a queue, and the points that do not are eliminated. By means
of this step, structure body edge breakpoints and obstacle edge breakpoints can be effectively
distinguished.
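A sketch of the 4×4 m window check (the grouping is simplified relative to the linked-list description; centring the window on each candidate breakpoint is an assumption):

```python
def window_filter(bp_points, size=4.0):
    """Keep groups of breakpoints that have at least two members inside a
    size x size window centred on each candidate breakpoint."""
    half = size / 2.0
    queue = []
    for cx, cy in bp_points:
        group = [(x, y) for (x, y) in bp_points
                 if abs(x - cx) <= half and abs(y - cy) <= half]
        if len(group) >= 2:        # pallet legs appear at least in pairs
            queue.append(group)
    return queue

print(len(window_filter([(0.0, 0.0), (1.0, 1.0), (10.0, 10.0)])))  # 2
```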
Step A6: setting a pallet search box and completing clustering search.
The breakpoint linked lists in the queue are taken out in sequence, and the mean of all objects in
each breakpoint linked list is calculated to serve as the central point of the pallet search box,
so as to construct the pallet search box and specify the approximate range of the pallet. A
KD-tree is established for all point clouds in the search box so as to accelerate the DBSCAN
clustering operation; the point clouds are divided into linear feature points, pallet leg feature
points and invalid points again in combination with the curvature threshold in step A3. The serial
number 3 in Fig. 8 shows this type of point clouds, that is, the pallet legs, and the number of
such point clouds is mostly greater than two. The pallet leg feature mean characterization points
in all search boxes are stored, and it is ensured that the number of pallet leg characterization
points in each group is at least 2.
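The patent accelerates DBSCAN with a KD-tree; as a dependency-free stand-in that conveys the same idea, a naive eps-neighbour clustering plus the mean characterization point might look like (eps and min_pts values are illustrative):

```python
def cluster(points, eps=0.3, min_pts=2):
    """Naive density clustering (stand-in for KD-tree + DBSCAN): grow a
    cluster from each unvisited point via eps-neighbour expansion."""
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        stack, ids = [seed], {seed}
        while stack:
            i = stack.pop()
            for j in list(unvisited):
                xi, yi = points[i]
                xj, yj = points[j]
                if (xi - xj) ** 2 + (yi - yj) ** 2 <= eps ** 2:
                    unvisited.remove(j)
                    ids.add(j)
                    stack.append(j)
        if len(ids) >= min_pts:        # sparse singletons are invalid points
            clusters.append(sorted(ids))
    return clusters

def leg_characterization(points, cluster_ids):
    """Mean characterization point of one pallet leg cluster."""
    xs = [points[i][0] for i in cluster_ids]
    ys = [points[i][1] for i in cluster_ids]
    return (sum(xs) / len(xs), sum(ys) / len(ys))
```

In a real implementation the neighbour search inside the loop is exactly what the KD-tree would replace, turning the quadratic scan into logarithmic range queries.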
The pallet credibility self-evaluation module constructs a pallet feature self-evaluation function
and performs score estimation on the pallet leg feature mean characterization points output by the
storage pallet terminal recognition and detection module. This module no longer builds the
distance template required when a traditional laser radar recognizes the pallet; instead, it
calculates and sorts the pallet leg characterization point linked lists that meet the requirements
in all the search boxes output by the above module, and uses the group of point clouds with the
highest score as the pallet leg feature point coordinates. To ensure the stability and safety of
recognition, the highest-score feature points of 10 frames of point clouds are checked and
compared, and an optimal solution is obtained according to the Gaussian distribution. This method
has strong adaptability and a high degree of flexibility.
The working method of the pallet credibility self-evaluation module, as shown in Fig. 4, includes
the following steps:
Step B1: calculating a pallet leg point cloud distribution score.
A circular range is constructed with the pallet leg characterization point obtained by clustering as the central point, and a classification score is calculated as follows:

dis_score = range_count / clust_sum

wherein dis_score represents the point cloud distribution score of a pallet leg characterization point, range_count represents the number of the same type of clustered points falling into the circular range, and clust_sum represents the total number of the same type of clustered points. The higher the score, the greater the possibility that this type of point cloud is deemed to be a pallet leg.
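A minimal sketch of the step B1 score; the radius of the circular range is an assumed tuning parameter, since the patent does not give its value:

```python
import math

def dis_score(cluster_points, center, radius=0.1):
    """Point cloud distribution score: the fraction of a cluster's points
    falling inside a circle around the leg characterization point."""
    range_count = sum(1 for (x, y) in cluster_points
                      if math.hypot(x - center[0], y - center[1]) <= radius)
    return range_count / len(cluster_points)   # range_count / clust_sum
```

A tight cluster centered on its characterization point scores close to 1; stray points pull the score down.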
Step B2: calculating a pallet leg point cloud shape score.
All the pallet leg feature mean characterization points are considered, three characterization points are cyclically selected from the point cloud, and the slopes of their connecting lines are calculated as follows:

k = (y1 − y2) / (x1 − x2)

wherein k, x and y respectively represent the slope of a connecting line and the abscissa and ordinate of a characterization point. After the slopes of the connecting lines are calculated, the degree to which a right angle is formed between the connecting lines is calculated as follows:

angle = |arctan((k1 − k2) / (1 + k1 · k2))| × 180 / π

wherein angle represents the included angle between two connecting lines. The pallet leg point cloud shape score is then calculated from the included angles among the line segments of the three points:

shape_score = angle_i / 90, if angle_i ∈ (0, 90], where i = argmin(relative_angle);
shape_score = (180 − angle_i) / 90, otherwise

wherein shape_score represents the pallet leg point cloud shape score, i is the index of the connecting-line angle with the minimum relative angle value, and relative_angle represents the relative angle value of the included angle between the connecting lines. The score characterizes how closely the quadrangle formed by the point cloud characterization points approximates a square: the higher the score, the more square the quadrangle.
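The step B2 computation can be sketched as below. It assumes no connecting line is vertical (infinite slope), which a production version would need to handle; perpendicular lines (1 + k1·k2 = 0) are treated as a 90° angle.

```python
import math

def line_slope(p, q):
    # Slope of the connecting line; assumes the line is not vertical.
    return (q[1] - p[1]) / (q[0] - p[0])

def included_angle(k1, k2):
    # Included angle between two lines, in degrees, from their slopes.
    denom = 1 + k1 * k2
    if abs(denom) < 1e-12:
        return 90.0  # perpendicular lines
    return abs(math.atan((k1 - k2) / denom)) * 180 / math.pi

def shape_score(p1, p2, p3):
    """Score how close the corner formed at p2 is to a right angle:
    1.0 for a perfect square corner, falling off linearly."""
    angle = included_angle(line_slope(p1, p2), line_slope(p2, p3))
    return angle / 90 if angle <= 90 else (180 - angle) / 90
```

Three leg points forming a right-angled corner, as at the corner of a square pallet, score 1.0; nearly collinear points score near 0.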
Step B3: calculating the total score of the pallet characterization point linked lists in the queue. The pallet leg point cloud shape score is multiplied by all the pallet leg point cloud distribution scores of the legs that form the quadrilateral shape of the pallet, so as to obtain the total score. The higher the total score, the greater the possibility that the characterization point queue is deemed to be the pallet feature point cloud.
Step B4: querying the features of a reflector and updating the priority. The point cloud reflectance data in a detection frame is queried for data matching the feature information of the reflector. If such data exists and a high-reflectance characterization point is located at the front end of the total score queue, the priority of the characterization point linked list carrying the reflector information is raised, and the first pallet characterization point linked list in the total score queue is output after the adjustment is completed.
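Steps B3 and B4 together reduce to a ranking pass. The candidate record layout below (shape score, per-leg distribution scores, reflector flag) is an illustrative assumption:

```python
def rank_candidates(candidates):
    """candidates: list of dicts with keys 'shape' (shape score), 'dis'
    (distribution scores of the legs forming the quadrilateral) and
    'reflector' (whether the legs carry reflector features).
    Returns the list ranked by total score, with reflector-bearing
    candidates promoted to the front of the queue."""
    def total(c):
        t = c['shape']
        for s in c['dis']:
            t *= s           # B3: shape score times all distribution scores
        return t
    ranked = sorted(candidates, key=total, reverse=True)
    # B4: a stable re-sort promotes candidates matching the reflector
    # signature while preserving the score order within each group.
    return sorted(ranked, key=lambda c: not c['reflector'])
```

Python's sort is stable, so the second pass reorders only between the reflector and non-reflector groups, which mirrors the priority update of step B4.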
For the mobile robot pose correction module, the mobile robot corrects its cut-in position according to the recognized position of the pallet, so as to ensure safe and reliable lifting operation. The mobile robot pose correction module, as shown in Fig. 5, includes the following steps:

Step C1: calculating a virtual cut-in cap point and a virtual termination cap point of the mobile robot. The coordinate points output by the pallet credibility self-evaluation module are sorted from small to large according to the headstock orientation of the mobile robot, and the virtual cut-in cap point and virtual termination cap point are calculated as follows:

entry_cap = ((x1 + x2) / 2, (y1 + y2) / 2)

entry_angle = arctan((x1 − x2) / (y1 − y2))

end_cap = (diagonal_x / 2, diagonal_y / 2)

wherein entry_cap represents the virtual cut-in cap point, which is used for adjusting the position of the robot; entry_angle represents the virtual cut-in angle, which is used for adjusting the pose of the robot; end_cap represents the virtual pallet terminal parking point of the mobile robot; and diagonal_x and diagonal_y represent the diagonal coordinate element values.
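Reading the step C1 formulas as midpoints (the published equations are partly illegible, so this is a hedged reconstruction), the cap points might be computed as:

```python
import math

def cap_points(p1, p2, diagonal):
    """p1, p2: the two pallet leg points nearest the robot; diagonal:
    coordinates of the diagonally opposite leg, expressed in a frame
    with one near leg at the origin (an illustrative assumption).
    Returns the entry cap, cut-in angle (radians) and end cap."""
    (x1, y1), (x2, y2) = p1, p2
    entry_cap = ((x1 + x2) / 2, (y1 + y2) / 2)    # midpoint of the near legs
    entry_angle = math.atan2(x1 - x2, y1 - y2)    # cut-in heading
    end_cap = (diagonal[0] / 2, diagonal[1] / 2)  # pallet-center parking point
    return entry_cap, entry_angle, end_cap
```

`atan2` is used instead of a bare `arctan` quotient so a zero denominator (legs level with the headstock) does not divide by zero.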
Step C2: adjusting the pose of the mobile robot and performing operations. Fig. 9 shows a schematic diagram of the actual work of the mobile robot. Serial number 1 in the figure is end_cap, which is calculated from the pallet leg point cloud coordinates and is used for the accurate parking of the mobile robot to ensure high-precision operations. Serial number 2 is the position where the mobile robot detects and recognizes the pallet; this position contains two-dimensional code information that records the actions to be executed by the mobile robot. Serial number 3 represents the cap points on the normal working path of the mobile robot, used by the robot to record key positions and for local navigation. Serial number 4 is the parking area of the pallet, and serial number 5 is entry_cap. The dotted line represents the path specified by the scheduling system, and the thin dotted line represents the virtual path along which the mobile robot lifts the pallet. An upper computer of the mobile robot controls a lower computer to adjust the pose to entry_cap and constructs the virtual path to cut in under the pallet. When the robot reaches the point end_cap, the industrial camera at the top end of the mobile robot recognizes the two-dimensional code at the bottom of the pallet and feeds it back to the scheduling system of the robot. After the robot resumes normal work, it returns to the normal operation path along the cut-in path, and the two types of virtual cap points, entry_cap and end_cap, are deleted. This realizes flexible and intelligent terminal pallet jacking operation by the mobile robot without human intervention, which improves the fault tolerance and robustness of the terminal operations of the entire mobile robot.
Embodiment 2:
Embodiment 2 of the present disclosure provides a mobile robot pose correction method for
recognizing dynamic pallet terminals, including the following processes:
obtaining the laser point cloud cluster data of a pallet, and processing the obtained laser point cloud
cluster data to obtain pallet leg feature mean characterization points;
obtaining a point cloud distribution score and a point cloud shape score according to all the pallet
leg feature mean characterization points, and obtaining pallet leg feature point coordinates
according to a product of the two scores; and
performing pose correction on a mobile robot according to the obtained pallet leg feature point
coordinates.
S1: the obtaining the laser point cloud cluster data of the pallet, and processing the obtained laser point cloud cluster data to obtain pallet leg feature mean characterization points includes the following processes:
Step A1: when the mobile robot runs to a specified lifting operation station on the specified path of the scheduling system, the robot stops moving and simultaneously stores a frame of laser radar scanning point cloud data to serve as the original data for subsequent processing.
Step A2: screening and eliminating the original point cloud data.
There are empty points in the original data (points that are not detected output x and y coordinates of 0), and these empty points are eliminated to reduce the subsequent calculation amount. Because the density of laser point clouds becomes sparser as the distance increases, a rectangular range at a middle distance in front of the laser radar is defined as the pallet placement area; the point clouds within this rectangular range in the original data are screened out as the input information for recognition and detection, the remaining unscreened data is eliminated, and the order of the original point cloud data remains unchanged during the process.
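A minimal sketch of this screening step; the rectangular range limits are illustrative tuning values, not figures from the patent:

```python
def screen_points(points, x_range=(0.3, 3.0), y_range=(-1.0, 1.0)):
    """Drop empty returns (x == y == 0) and keep only points inside the
    rectangular pallet placement area, preserving the scan order."""
    return [(x, y) for (x, y) in points
            if not (x == 0.0 and y == 0.0)
            and x_range[0] <= x <= x_range[1]
            and y_range[0] <= y <= y_range[1]]
```

Because a list comprehension walks the input in order, the serial ordering of the scan, which the later neighborhood and breakpoint steps rely on, is preserved.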
Step A3: processing the input data to complete feature point classification.
The input data is used to calculate the curvatures, coordinate means and coordinate variances of the point clouds over a neighborhood of three points on each side, as follows:

cur_j = Σ_{i=j−3}^{j+3} [(x_j − x_i)² + (y_j − y_i)²]

x̄_j = (1/7) Σ_{i=j−3}^{j+3} x_i,   ȳ_j = (1/7) Σ_{i=j−3}^{j+3} y_i

x_var_j = (1/7) Σ_{i=j−3}^{j+3} (x_i − x̄_j)²,   y_var_j = (1/7) Σ_{i=j−3}^{j+3} (y_i − ȳ_j)²

wherein cur, x̄, ȳ, x_var and y_var respectively represent the curvature, x coordinate mean, y coordinate mean, x coordinate variance and y coordinate variance of the point cloud, and the subscript j is the serial number of the current point. By establishing a curvature threshold, the point clouds are classified into points with line features and points with corner features; the point clouds are traversed to assign classification information and eliminate the line feature points, a distribution variance threshold and a curvature threshold are established for the screened-out corner feature points, and the corner feature points are further divided into breakpoints and corner points.
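The neighborhood statistics of step A3 might be computed as below. The exact curvature definition is not legible in the published formulas; the sum of squared offsets from the center point over the 7-point window is an assumed stand-in.

```python
def neighborhood_stats(points, j):
    """Curvature, means and variances of point j over the window
    [j-3, j+3] (j must have three neighbors on each side)."""
    window = points[j - 3:j + 4]
    n = len(window)                      # 7 for an interior point
    x_mean = sum(x for x, _ in window) / n
    y_mean = sum(y for _, y in window) / n
    x_var = sum((x - x_mean) ** 2 for x, _ in window) / n
    y_var = sum((y - y_mean) ** 2 for _, y in window) / n
    xj, yj = points[j]
    cur = sum((xj - x) ** 2 + (yj - y) ** 2 for x, y in window)
    return cur, x_mean, y_mean, x_var, y_var
```

A point at a sharp corner has a larger curvature value than one in the middle of a straight wall segment, so thresholding cur separates line features from corner features.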
Step A4: performing voxel filtering on breakpoint feature points.
Because points at a discontinuity in the point cloud have similar curvatures, several repeated breakpoints arise at the same discontinuity. The point clouds belonging to the same breakpoint are therefore filtered, and the breakpoint with the minimum serial number within the range of the discontinuous point cloud is selected as the breakpoint characterization.
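The filtering of repeated breakpoints reduces to keeping the smallest scan index per run; the index gap that separates one discontinuity from the next is an assumed parameter:

```python
def dedupe_breakpoints(indices, gap=5):
    """Collapse runs of breakpoint indices belonging to the same
    discontinuity, keeping the minimum serial number of each run."""
    kept, prev = [], None
    for idx in sorted(indices):
        if prev is None or idx - prev > gap:
            kept.append(idx)   # first (minimum) index of a new run
        prev = idx
    return kept
```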
Step A5: performing windowed pallet feature filtering on the basis of the breakpoints.
For the breakpoints obtained by the above filtering, each rectangular range is sequentially selected and checked for the presence of at least two breakpoints; the breakpoint linked lists meeting this requirement are stored in a queue, and the points that do not are eliminated. By means of this step, structure body edge breakpoints and obstacle edge breakpoints can be effectively distinguished.
Step A6: setting a pallet search box and completing clustering search.
The breakpoint linked lists in the queue are taken out in sequence, and the mean of all objects in each breakpoint linked list is calculated to serve as the central point of a pallet search box, which specifies the approximate range of the pallet. A DBSCAN clustering operation is performed on all the point clouds in the search box, and in combination with the curvature threshold in step A3 the point clouds are again divided into linear feature points, pallet leg feature points and invalid points. The pallet leg feature mean characterization points in all search boxes are stored, and it is ensured that each group contains at least two pallet leg characterization points.
S2: the obtaining the point cloud distribution score and the point cloud shape score according to all
the pallet leg feature mean characterization points, and obtaining the pallet leg feature point
coordinates according to the product of the two scores includes the following processes:
Step B1: calculating the pallet leg point cloud distribution score.
A circular range is constructed with the pallet leg characterization point obtained by clustering as the central point, and a classification score is calculated as follows:

dis_score = range_count / clust_sum

wherein dis_score represents the point cloud distribution score of a pallet leg characterization point, range_count represents the number of the same type of clustered points falling into the circular range, and clust_sum represents the total number of the same type of clustered points. The higher the score, the greater the possibility that this type of point cloud is deemed to be a pallet leg.
Step B2: calculating the pallet leg point cloud shape score.
All the pallet leg feature mean characterization points are considered, three characterization points are cyclically selected from the point cloud, and the slopes of their connecting lines are calculated as follows:

k = (y1 − y2) / (x1 − x2)

wherein k, x and y respectively represent the slope of a connecting line and the abscissa and ordinate of a characterization point. After the slopes of the connecting lines are calculated, the degree to which a right angle is formed between the connecting lines is calculated as follows:

angle = |arctan((k1 − k2) / (1 + k1 · k2))| × 180 / π

wherein angle represents the included angle between two connecting lines. The pallet leg point cloud shape score is then calculated from the included angles among the line segments of the three points:

shape_score = angle_i / 90, if angle_i ∈ (0, 90], where i = argmin(relative_angle);
shape_score = (180 − angle_i) / 90, otherwise

wherein shape_score represents the pallet leg point cloud shape score, i is the index of the connecting-line angle with the minimum relative angle value, and relative_angle represents the relative angle value of the included angle between the connecting lines. The score characterizes how closely the quadrangle formed by the point cloud characterization points approximates a square: the higher the score, the more square the quadrangle.
Step B3: calculating the total score of the pallet characterization point linked lists in the queue.
The pallet leg point cloud shape score is multiplied by all the pallet leg point cloud distribution scores of the legs that form the quadrilateral shape of the pallet, so as to obtain the total score. The higher the total score, the greater the possibility that the characterization point queue is deemed to be the pallet feature point cloud.
Step B4: querying the features of a reflector and updating the priority.
The point cloud reflectance data in a detection frame is queried for data matching the feature information of the reflector. If such data exists and a high-reflectance characterization point is located at the front end of the total score queue, the priority of the characterization point linked list carrying the reflector information is raised, and the first pallet characterization point linked list in the total score queue is output after the adjustment is completed.
S3: the performing pose correction on the mobile robot according to the obtained pallet leg feature point coordinates includes the following processes:

Step C1: calculating a virtual cut-in cap point and a virtual termination cap point of the mobile robot. The coordinate points output by the pallet credibility self-evaluation module are sorted from small to large according to the headstock orientation of the mobile robot, and the virtual cut-in cap point and virtual termination cap point are calculated as follows:

entry_cap = ((x1 + x2) / 2, (y1 + y2) / 2)

entry_angle = arctan((x1 − x2) / (y1 − y2))

end_cap = (diagonal_x / 2, diagonal_y / 2)

wherein entry_cap represents the virtual cut-in cap point, used for adjusting the position of the robot; entry_angle represents the virtual cut-in angle, used for adjusting the pose of the robot; end_cap represents the virtual pallet terminal parking point of the mobile robot; and diagonal_x and diagonal_y represent the diagonal coordinate element values.
Step C2: adjusting the pose of the mobile robot and performing operations.
An upper computer of the mobile robot controls a lower computer to adjust the pose to entry_cap and constructs a virtual path to cut in under the pallet. When the robot reaches the point end_cap, the industrial camera at the top end of the mobile robot recognizes the two-dimensional code at the bottom of the pallet and feeds it back to the scheduling system of the robot. After the robot resumes normal work, it returns to the normal operation path along the cut-in path, and the two types of virtual cap points, entry_cap and end_cap, are deleted.
Embodiment 3:
Embodiment 3 of the present disclosure provides a computer readable storage medium on which a
program is stored, and when executed by a processor, the program implements the steps in the
mobile robot pose correction method for recognizing dynamic pallet terminals in embodiment 2 of
the present disclosure.
Embodiment 4:
Embodiment 4 of the present disclosure provides an electronic device, including a memory, a
processor, and a program stored in the memory and capable of running on the processor, wherein
when the processor executes the program, the program implements the steps in the mobile robot
pose correction method for recognizing dynamic pallet terminals in embodiment 2 of the present
disclosure.
Those skilled in the art should understand that the embodiments of the present disclosure can be provided as a method, a system or a computer program product. Accordingly, the present disclosure can adopt the form of a hardware embodiment, a software embodiment, or an embodiment combining software with hardware. Moreover, the present disclosure can adopt the form of a computer program product implemented on one or more computer usable storage media (including, but not limited to, a magnetic disk memory, an optical memory and the like) containing computer usable program code.
The present disclosure is described in accordance with the flow diagram and/or block diagram of
the method, the device (system) and the computer program product in the embodiments of the
present disclosure. It should be understood that computer program instructions can realize each
flow and/or block in the flow diagram and/or the block diagram and the combination of the flows
and/or blocks in the flow diagram and/or the block diagram. These computer program instructions
can be provided to a general-purpose computer, a special-purpose computer, an embedded
processor or processors of other programmable data processing devices to generate a machine, such
that the instructions executed by the computers or the processors of the other programmable data
processing devices generate apparatuses used for achieving specified functions in one or more flows
of the flow diagram and/or one or more blocks of the block diagram.
These computer program instructions can also be stored in a computer readable memory that is
capable of guiding the computers or the other programmable data processing devices to work in
particular manners, such that the instructions stored in the computer readable memory generate
products including instruction apparatuses, and the instruction apparatuses achieve the specified
functions in one or more flows of the flow diagram and/or one or more blocks of the block diagram.
These computer program instructions can also be loaded on the computers or the other
programmable data processing devices, so as to execute a series of operation steps on the computers
or the other programmable data processing devices to produce processing achieved by the
computers, such that the instructions executed on the computers or the other programmable data
processing devices provide steps used for achieving the specified functions in one or more flows of
the flow diagram and/or one or more blocks of the block diagram.
Those of ordinary skill in the art can understand that all or a part of flows in the above-mentioned
method embodiment can be implemented by a computer program instructing corresponding
hardware, the program can be stored in a computer readable storage medium, and when being
executed, the program can include the flows of the above-mentioned method embodiments. The
storage medium can be a magnetic disk, an optical disk, a read-only memory (Read-Only Memory,
ROM), a random access memory (Random Access Memory, RAM), etc.
The above descriptions are only preferred embodiments of the present disclosure and are not
intended to limit the present disclosure. For those skilled in the art, the present disclosure can have
various modifications and changes. Any modifications, equivalent replacements, improvements and
the like, made within the spirit and principle of the present disclosure, shall be included in the
protection scope of the present disclosure.

Claims (10)

Claims
1. A mobile robot pose correction method for recognizing dynamic pallet terminals, comprising the
following processes:
obtaining the laser point cloud cluster data of a pallet, and processing the obtained laser point cloud
cluster data to obtain pallet leg feature mean characterization points;
obtaining a point cloud distribution score and a point cloud shape score according to all the pallet
leg feature mean characterization points, and obtaining pallet leg feature point coordinates
according to a product of the two scores; and
performing pose correction on a mobile robot according to the obtained pallet leg feature point
coordinates.
2. The mobile robot pose correction method for recognizing dynamic pallet terminals of claim 1,
wherein:
the step of obtaining the laser point cloud cluster data of the pallet, and processing the obtained
laser point cloud cluster data comprises the following processes:
after the mobile robot stops moving, obtaining at least one frame of scanning point cloud data of a
laser radar to serve as original point cloud data;
eliminating and screening the original point cloud data;
performing feature point classification on the point cloud data after the eliminating and screening
processing, classifying point clouds into line feature points and corner feature points, eliminating
the line feature points, and dividing the corner feature points into breakpoints and corner points;
performing voxel filtering on the breakpoints;
performing windowed pallet feature filtering on the basis of the breakpoints; and
completing clustering search according to a preset pallet search box.
3. The mobile robot pose correction method for recognizing dynamic pallet terminals of claim 2,
wherein:
the step of eliminating and screening the original point cloud data comprises the following
processes:
eliminating empty points, limiting a rectangular range of a middle distance in front of the laser radar
as a pallet placement area, screening out the point clouds existing in this rectangular range in the
original data to serve as input information for recognition and detection, eliminating the remaining
unscreened data, and keeping the order of the original point cloud data unchanged during the
process;
or,
the step of performing feature point classification on the point cloud data after the eliminating and
screening processing comprises the following processes:
classifying the point clouds into points with line features and corner features by establishing a
curvature threshold, traversing the point clouds to give classification information and eliminate the
line feature points, establishing a distribution variance threshold and a curvature threshold for the
screened out corner feature points, and dividing the corner feature points into breakpoints and
corner points again;
or,
the step of performing voxel filtering on the breakpoints comprises the following processes:
performing filtering on the point clouds belonging to the same breakpoint, and selecting the
breakpoint with the minimum serial number within a discontinuous point cloud range as breakpoint
characterization;
or,
the step of performing windowed pallet feature filtering on the basis of the breakpoints comprises
the following processes:
for the breakpoints obtained by filtering, sequentially selecting and accessing whether there are at
least two breakpoints within a rectangular range, storing breakpoint linked lists that meet the
requirements in a queue, and eliminating the unsatisfied points;
or,
the step of completing clustering search according to the preset pallet search box comprises the
following steps:
taking out the breakpoint linked lists in the queue in sequence, calculating a mean of all objects in
the breakpoint linked lists, and using the mean as the central point of the pallet search box, so as to
construct the pallet search box; and
performing a clustering operation on all point clouds in the search box, classifying the point clouds
into linear feature points, pallet leg feature points and invalid points again in combination with the
obtained curvature threshold, and storing the pallet leg feature mean characterization points in all
search boxes.
4. The mobile robot pose correction method for recognizing dynamic pallet terminals of claim 1,
wherein:
a circular range is constructed with the obtained pallet leg feature point coordinates as the central
point, and the point cloud distribution score is calculated, and is specifically the ratio of the number
of the same type of clustered point clouds falling in the circular range to the number of the same
type of clustered point clouds.
5. The mobile robot pose correction method for recognizing dynamic pallet terminals of claim 1,
wherein:
the acquisition of the point cloud shape score comprises:
performing selection and calculation on all the pallet leg feature mean characterization points,
circularly selecting three points in the point clouds, and calculating slope values of connecting lines
of the three characterization points; and
calculating the degree of a right angle between the connecting lines according to the obtained slopes
of the connecting lines, and calculating the pallet leg point cloud shape score through the included
angles among the connecting lines of the three points.
6. The mobile robot pose correction method for recognizing dynamic pallet terminals of claim 1,
wherein:
after the two scores are obtained, the pallet leg point cloud shape score is multiplied by all the pallet
leg point cloud distribution scores that constitute a quadrilateral shape of the pallet to obtain a total
score, and the group of point clouds with the highest score is used as the pallet leg feature point
coordinates;
or,
whether there is data meeting the feature information of a reflector is queried from point cloud
reflectance data in a detection frame, if so, and a high reflectance characterization point is located at
the front end of a total score queue, the priority of the characterization point linked list with the
information of the reflector is updated, and the first pallet characterization point linked list in the
total score queue is output after the adjustment is completed.
7. The mobile robot pose correction method for recognizing dynamic pallet terminals of claim 1,
wherein:
the step of performing pose correction on the mobile robot according to the obtained pallet leg
feature point coordinates comprises the following processes:
according to the obtained pallet leg feature point coordinates, sorting the headstock orientation of
the mobile robot from small to large, and calculating a virtual cut-in cap point and a virtual
termination cap point of the mobile robot;
adjusting the posture of the mobile robot to the virtual cut-in cap point, constructing a virtual path
to cut into the lower side of the pallet, and when the position reaches the virtual termination cap
point, controlling a top end camera module of the mobile robot to recognize an identification code
at the bottom of the pallet; and
after the mobile robot operates normally, returning to a normal operation path of the mobile robot
according to the cut-in path, and deleting the virtual cut-in cap point and the virtual termination cap
point.
8. A mobile robot pose correction system for recognizing dynamic pallet terminals, comprising:
a storage pallet terminal recognition and detection module, configured to: obtain the laser point
cloud cluster data of a pallet, and process the obtained laser point cloud cluster data to obtain pallet
leg feature mean characterization points;
a pallet credibility self-evaluation module, configured to obtain a point cloud distribution score and
a point cloud shape score according to all the pallet leg feature mean characterization points, and
obtain pallet leg feature point coordinates according to a product of the two scores; and
a mobile robot pose correction module, configured to perform pose correction on a mobile robot
according to the obtained pallet leg feature point coordinates.
9. A computer readable storage medium on which a program is stored, wherein when executed by a
processor, the program implements the steps in the mobile robot pose correction method for
recognizing dynamic pallet terminals of any one of claims 1-7.
10. An electronic device, comprising a memory, a processor, and a program stored in the memory
and capable of running on the processor, wherein when the processor executes the program, the
program implements the steps in the mobile robot pose correction method for recognizing dynamic
pallet terminals of any one of claims 1-7.
AU2021266204A 2021-03-01 2021-11-09 Mobile robot pose correction method and system for recognizing dynamic pallet terminals Active AU2021266204B2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2021102959751 2021-03-01
CN202110295975.1A CN112935703B (en) 2021-03-19 2021-03-19 Mobile robot pose correction method and system for identifying dynamic tray terminal

Publications (2)

Publication Number Publication Date
AU2021266204A1 true AU2021266204A1 (en) 2022-09-15
AU2021266204B2 AU2021266204B2 (en) 2023-06-15

Family

ID=76227084

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2021266204A Active AU2021266204B2 (en) 2021-03-01 2021-11-09 Mobile robot pose correction method and system for recognizing dynamic pallet terminals

Country Status (2)

Country Link
CN (1) CN112935703B (en)
AU (1) AU2021266204B2 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113516750B (en) * 2021-06-30 2022-09-27 同济大学 Three-dimensional point cloud map construction method and system, electronic equipment and storage medium
CN114055781B (en) * 2021-10-24 2023-12-29 扬州大学 Self-adaptive correction method for fuel tank welding mechanical arm based on point voxel correlation field
CN115159402A (en) * 2022-06-17 2022-10-11 杭州海康机器人技术有限公司 Goods putting and taking method and device, electronic equipment and machine readable storage medium
CN115600118B (en) * 2022-11-29 2023-08-08 山东亚历山大智能科技有限公司 Tray leg identification method and system based on two-dimensional laser point cloud

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10328578B2 (en) * 2017-04-21 2019-06-25 X Development Llc Methods and systems for detecting, recognizing, and localizing pallets
JP2020507137A (en) * 2017-12-11 2020-03-05 ベイジン ディディ インフィニティ テクノロジー アンド ディベロップメント カンパニー リミティッド System and method for identifying and positioning objects around a vehicle
CN110163906B (en) * 2019-05-22 2021-10-29 北京市商汤科技开发有限公司 Point cloud data processing method and device, electronic equipment and storage medium
EP3764273A1 (en) * 2019-07-08 2021-01-13 Fraunhofer Gesellschaft zur Förderung der Angewand System and method for identifying a pallet
US11389965B2 (en) * 2019-07-26 2022-07-19 Mujin, Inc. Post-detection refinement based on edges and multi-dimensional corners
CN111533051B (en) * 2020-05-08 2021-12-17 三一机器人科技有限公司 Tray pose detection method and device, forklift and freight system
CN112070759B (en) * 2020-09-16 2023-10-24 浙江光珀智能科技有限公司 Fork truck tray detection and positioning method and system

Also Published As

Publication number Publication date
CN112935703B (en) 2022-09-27
AU2021266204B2 (en) 2023-06-15
CN112935703A (en) 2021-06-11

Similar Documents

Publication Publication Date Title
AU2021266204B2 (en) Mobile robot pose correction method and system for recognizing dynamic pallet terminals
CA3138243C (en) Tracking vehicles in a warehouse environment
US11638993B2 (en) Robotic system with enhanced scanning mechanism
US20230410319A1 (en) Method and computing system for object identification
AU2021288667B2 (en) Control method and apparatus for warehouse robot, and robot and warehouse system
CN113253737B (en) Shelf detection method and device, electronic equipment and storage medium
CN112070838A (en) Object identification and positioning method and device based on two-dimensional-three-dimensional fusion characteristics
WO2023005384A1 (en) Repositioning method and device for mobile equipment
KR20180046361A (en) Method and System for loading optimization based on depth sensor
LU501953B1 (en) High-accuracy method for controlling grabbing position of grab with radar feedback
CN108828518A (en) A kind of boxcar inside carrier-and-stacker localization method
CN113050636A (en) Control method, system and device for autonomous tray picking of forklift
CN116443527B (en) Pallet fork method, device, equipment and medium based on laser radar
Li et al. Pallet detection and localization with RGB image and depth data using deep learning techniques
US20220284216A1 (en) Method and computing system for generating a safety volume list for object detection
CN115586552A (en) Method for accurately secondarily positioning unmanned truck collection under port tyre crane or bridge crane
CN114688992B (en) Method and device for identifying reflective object, electronic equipment and storage medium
CN116972831B (en) Dynamic scene mobile robot positioning method and system based on salient features
CN116385533A (en) Fork type AGV target pose detection method based on two-dimensional and three-dimensional imaging
CN114295118A (en) Multi-robot positioning method, device and equipment
Zhao et al. A Method for Autonomous Shelf Recognition and Docking of Mobile Robots Based on 2D LiDAR
CN117474892A (en) Goods shelf identification method, mobile robot and storage medium
Kirci et al. EuroPallet Detection with RGB-D Camera Based on Deep Learning
TW202411139A (en) Container storage method and robot
CN117115240A (en) Universal pallet 3D pose positioning method and system and storage medium

Legal Events

Date Code Title Description
FGA Letters patent sealed or granted (standard patent)