CN112179330B - Pose determination method and device of mobile equipment - Google Patents

Pose determination method and device of mobile equipment

Info

Publication number
CN112179330B
CN112179330B CN202010963179.6A
Authority
CN
China
Prior art keywords
image
pose
mobile equipment
probability grid
grid map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010963179.6A
Other languages
Chinese (zh)
Other versions
CN112179330A (en)
Inventor
胡立志
林辉
柯建华
卢维
殷俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Huaray Technology Co Ltd
Original Assignee
Zhejiang Huaray Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Huaray Technology Co Ltd filed Critical Zhejiang Huaray Technology Co Ltd
Priority to CN202010963179.6A priority Critical patent/CN112179330B/en
Priority to EP20944546.9A priority patent/EP4153940A4/en
Priority to KR1020237003646A priority patent/KR20230029981A/en
Priority to PCT/CN2020/141652 priority patent/WO2022007367A1/en
Publication of CN112179330A publication Critical patent/CN112179330A/en
Application granted granted Critical
Publication of CN112179330B publication Critical patent/CN112179330B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/005 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 with correlation of navigation data from several sources, e.g. map or contour matching
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B11/002 Measuring arrangements characterised by the use of optical techniques for measuring two or more coordinates
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00 Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00 Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C11/02 Picture taking arrangements specially adapted for photogrammetry or photographic surveying, e.g. controlling overlapping of pictures
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86 Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 Lidar systems specially adapted for specific applications
    • G01S17/89 Lidar systems specially adapted for specific applications for mapping or imaging
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/53 Querying
    • G06F16/532 Query formulation, e.g. graphical querying

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • General Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Multimedia (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a pose determination method and apparatus for a mobile device. The method includes the following steps: when the mobile device is determined to have lost its positioning, acquiring a first image captured by the mobile device; searching a pre-established image library for a second image whose similarity to the first image is greater than or equal to a first preset threshold, together with a first pose associated with the second image; rotating and/or translating the first pose to obtain N first candidate poses; and determining the current pose of the mobile device according to the laser positioning scores of the N first candidate poses. The invention thereby addresses the problem of large errors in robot pose positioning and improves the accuracy of pose positioning.

Description

Pose determination method and device of mobile equipment
Technical Field
The invention relates to the field of communication, in particular to a pose determination method and device of mobile equipment.
Background
When a mobile robot performs a task, such as handling objects in a warehouse, it needs to know its pose in the current environment accurately in order to carry out the task correctly. When the robot's pose is lost, that is, when a change in the environment leaves the robot unaware of its pose in the current environment, the robot needs to recover its current pose quickly.
In the prior art, visual relocalization is generally performed based on the robot's vision, and a positioning result is output. However, the initial pose estimated by visual relocalization alone has a large error, particularly in the angle: once the angle deviates by a certain amount, positioning will subsequently be lost.
Therefore, no effective solution currently exists for the problem of large robot pose positioning errors in the related art.
Disclosure of Invention
The embodiment of the invention provides a method and a device for determining the pose of a mobile device, which at least solve the problem of large robot pose positioning error in the related art.
According to an embodiment of the present invention, there is provided a pose determination method of a mobile device, including: under the condition that the mobile equipment is determined to be lost in positioning, acquiring a first image shot by the mobile equipment; searching a second image with the similarity greater than or equal to a first preset threshold value with the first image and a first pose associated with the second image in a pre-established image library, wherein the second image is an image shot by the mobile equipment in the first pose; rotating and/or translating the first pose to obtain N first candidate poses, wherein N is an integer greater than or equal to 1, and the N first candidate poses comprise the first pose; and determining the current pose of the mobile equipment according to the laser positioning scores of the N first candidate poses.
Optionally, the determining the current pose of the mobile device according to the laser positioning scores of the N first candidate poses comprises: determining the current pose of the mobile equipment by using a branch and bound method and through scores of N groups of laser point clouds of the mobile equipment on N first candidate poses respectively falling into a group of probability grid maps with different resolutions, wherein the group of probability grid maps with different resolutions are obtained by down-sampling an original probability grid map, the original probability grid map comprises a plurality of grids, and a numerical value marked on each grid is used for representing the score of the laser falling onto the grid.
Optionally, the down-sampling the original probability grid map to obtain the set of probability grid maps with different resolutions includes: performing down-sampling on the original probability grid map through a bidirectional sliding window to obtain a probability grid map with a preset resolution, wherein the resolution of the probability grid map with the preset resolution is smaller than that of the original probability grid map; and sequencing the probability grid map with the preset resolution and the original probability grid map according to the resolution to obtain a group of probability grid maps with different resolutions.
Optionally, the method further comprises: acquiring laser point cloud of the mobile equipment at the current pose; determining the matching rate of the laser point cloud of the mobile device at the current pose and the original probability grid map; and under the condition that the matching rate is smaller than or equal to a second preset threshold value, updating the original probability grid map according to the laser point cloud of the mobile equipment in the current pose.
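The match-rate check and conditional map update in this optional step can be sketched as follows, assuming a dict-based probability grid keyed by (row, col); the occupancy cutoff, thresholds, and probability increment are illustrative assumptions, not values from the patent:

```python
def match_rate(points, grid_map, resolution, occupied=0.5):
    """Fraction of laser points (already in the map frame) that land in
    cells the map currently considers occupied."""
    hits = 0
    for x, y in points:
        cell = (int(y // resolution), int(x // resolution))
        if grid_map.get(cell, 0.0) >= occupied:
            hits += 1
    return hits / len(points) if points else 0.0

def update_map_if_stale(points, grid_map, resolution, threshold, bump=0.1):
    """If the scan matches the map poorly (the environment has changed),
    raise the occupancy probability of the cells the current scan hits."""
    if match_rate(points, grid_map, resolution) <= threshold:
        for x, y in points:
            cell = (int(y // resolution), int(x // resolution))
            grid_map[cell] = min(1.0, grid_map.get(cell, 0.0) + bump)
    return grid_map
```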
Optionally, before the searching, in a pre-established image library, for a second image whose similarity to the first image is greater than or equal to a first preset threshold, the method further includes: when the mobile equipment determines that the moving distance or the rotating angle is greater than or equal to a first preset threshold value in the moving process of the original probability grid map, the pose of the mobile equipment is a key pose; establishing an association relation between a key frame acquired by the mobile equipment at the key pose and the corresponding key pose, and storing the association relation in the image library, wherein the key pose comprises the first pose, and the key frame comprises the first image.
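The key-pose decision in this optional step (a pose becomes a key pose once the moving distance or rotation angle since the last key pose reaches a threshold) can be sketched as follows; the planar pose convention (x, y, theta) and the function name are assumptions:

```python
import math

def is_key_pose(last_key_pose, pose, dist_thresh, angle_thresh):
    """Return True once the robot has moved or turned enough since the
    last key pose; at that point the current frame would be stored as a
    keyframe together with this pose."""
    dx = pose[0] - last_key_pose[0]
    dy = pose[1] - last_key_pose[1]
    # Wrap the heading difference into [-pi, pi) before comparing.
    dtheta = abs((pose[2] - last_key_pose[2] + math.pi) % (2 * math.pi) - math.pi)
    return math.hypot(dx, dy) >= dist_thresh or dtheta >= angle_thresh
```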
Optionally, the method further comprises: acquiring a third image shot by the mobile equipment at the current pose; determining the matching rate of a third image and the second image; and under the condition that the matching rate is smaller than or equal to a third preset threshold value, updating the second image in the image library to be the third image, and updating the first pose corresponding to the second image to be the current pose.
Optionally, the method further comprises: if the second image is not found in the pre-established image library, controlling the mobile device to move a preset distance or rotate a second preset angle; acquiring a fourth image captured by the mobile device; searching the pre-established image library for a fifth image whose similarity to the fourth image is greater than or equal to the first preset threshold and a second pose associated with the fifth image, wherein the fifth image is an image captured by the mobile device at the second pose; rotating the second pose according to a first preset angle difference to obtain N second candidate poses, wherein N is an integer greater than or equal to 1, and the N second candidate poses comprise the second pose; and determining the current pose of the mobile device according to the laser positioning scores of the N second candidate poses.
According to another embodiment of the present invention, there is provided a pose determination apparatus of a mobile device, including: an acquisition module, configured to acquire a first image captured by the mobile device when the mobile device is determined to have lost its positioning; a searching module, configured to search a pre-established image library for a second image whose similarity to the first image is greater than or equal to a first preset threshold and a first pose associated with the second image, wherein the second image is an image captured by the mobile device at the first pose; a rotating module, configured to rotate and/or translate the first pose to obtain N first candidate poses, wherein N is an integer greater than or equal to 1, and the N first candidate poses comprise the first pose; and a determining module, configured to determine the current pose of the mobile device according to the laser positioning scores of the N first candidate poses.
According to a further embodiment of the present invention, there is also provided a storage medium having a computer program stored therein, wherein the computer program is arranged to perform the steps of any of the above method embodiments when executed.
According to yet another embodiment of the present invention, there is also provided an electronic device, including a memory in which a computer program is stored and a processor configured to execute the computer program to perform the steps in any of the above method embodiments.
According to the invention, when the mobile device is determined to have lost its positioning, a first image captured by the mobile device is acquired; a second image whose similarity to the first image is greater than or equal to a first preset threshold, and a first pose associated with the second image, are searched for in a pre-established image library; the first pose is rotated and/or translated to obtain N first candidate poses; and the current pose of the mobile device is determined according to the laser positioning scores of the N first candidate poses. The current pose of the mobile device is thus determined through the laser positioning score, which solves the problem of large robot pose positioning errors and improves the accuracy of pose positioning.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention and do not constitute a limitation of the invention. In the drawings:
fig. 1 is a block diagram of a hardware configuration of a mobile terminal of a pose determination method of a mobile device according to an embodiment of the present invention;
fig. 2 is a flowchart of a pose determination method of a mobile device according to an embodiment of the present invention;
FIG. 3 is a schematic illustration of a mobile machine configuration according to an alternative embodiment of the present invention;
FIG. 4 is a schematic diagram of an original probability grid map in accordance with an alternative embodiment of the present invention;
FIG. 5 is a schematic diagram of a branch-and-bound approach according to an alternative embodiment of the present invention;
FIG. 6 is a schematic diagram of a probability map obtained by sliding a window left and right from an original probability grid map according to an alternative embodiment of the present invention;
FIG. 7 is a schematic diagram of a probability map obtained by sliding a window up and down an original probability grid map in accordance with an alternative embodiment of the present invention;
FIG. 8 is a flow chart of vision and laser repositioning according to an alternative embodiment of the present invention;
fig. 9 is a block diagram of the configuration of the pose determination apparatus of the mobile device according to the embodiment of the present invention.
Detailed Description
The invention will be described in detail hereinafter with reference to the drawings and embodiments. It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
The method provided by the first embodiment of the present application may be executed in a mobile terminal, a computer terminal, or a similar computing device. Taking the mobile terminal as an example, fig. 1 is a block diagram of a hardware structure of the mobile terminal of a pose determination method for a mobile device according to an embodiment of the present invention. As shown in fig. 1, the mobile terminal 10 may include one or more (only one shown in fig. 1) processors 102 (the processor 102 may include, but is not limited to, a processing device such as a microprocessor MCU or a programmable logic device FPGA) and a memory 104 for storing data, and optionally may also include a transmission device 106 for communication functions and an input-output device 108. It will be understood by those skilled in the art that the structure shown in fig. 1 is only an illustration, and does not limit the structure of the mobile terminal. For example, the mobile terminal 10 may also include more or fewer components than shown in FIG. 1, or have a different configuration than shown in FIG. 1.
The memory 104 may be used to store a computer program, for example, a software program of application software and a module, such as a computer program corresponding to the pose determination method of the mobile device in the embodiment of the present invention, and the processor 102 executes various functional applications and data processing by running the computer program stored in the memory 104, so as to implement the above-described method. The memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some instances, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the mobile terminal 10 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used for receiving or transmitting data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the mobile terminal 10. In one example, the transmission device 106 includes a Network adapter (NIC), which can be connected to other Network devices through a base station so as to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is used for communicating with the internet in a wireless manner.
As an alternative embodiment, the following explains terms referred to in the present application:
Odometer: used for preliminary estimation of the change in the mobile robot's heading angle and travelled distance; with a common wheel encoder, the robot's pose at the current moment is estimated from the pose at the previous moment and the pose change since the previous moment;
Two-dimensional lidar: a sensor for acquiring two-dimensional plane information and detecting the two-dimensional contour of the surrounding environment;
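The odometer's dead-reckoning update can be sketched as composing the previous pose with the encoder's body-frame increment. A minimal sketch under assumed conventions (planar pose (x, y, theta), increment (dx, dy, dtheta) in the robot's previous body frame; the function name is illustrative):

```python
import math

def propagate_pose(prev_pose, delta):
    """Estimate the current pose from the previous pose and the pose
    change reported by the wheel encoder since the previous moment."""
    x, y, theta = prev_pose
    dx, dy, dtheta = delta
    # Rotate the body-frame increment into the map frame, then translate.
    x_new = x + dx * math.cos(theta) - dy * math.sin(theta)
    y_new = y + dx * math.sin(theta) + dy * math.cos(theta)
    # Wrap the heading to [-pi, pi).
    theta_new = (theta + dtheta + math.pi) % (2 * math.pi) - math.pi
    return (x_new, y_new, theta_new)
```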
in this embodiment, a pose determination method for a mobile device running on the mobile terminal is provided, and fig. 2 is a flowchart of the pose determination method for a mobile device according to an embodiment of the present invention, as shown in fig. 2, the flowchart includes the following steps:
step S202, under the condition that the mobile equipment is determined to be lost in positioning, a first image shot by the mobile equipment is obtained;
step S204, searching a second image with the similarity larger than or equal to a first preset threshold value with the first image and a first pose associated with the second image in a pre-established image library, wherein the second image is an image shot by the mobile equipment in the first pose;
step S206, rotating and/or translating the first pose to obtain N first candidate poses, wherein N is an integer greater than or equal to 1, and the N first candidate poses comprise the first pose;
and step S208, determining the current pose of the mobile equipment according to the laser positioning scores of the N first candidate poses.
Through the above steps, when the mobile device is determined to have lost its positioning, a first image captured by the mobile device is acquired; a second image whose similarity to the first image is greater than or equal to a first preset threshold, and a first pose associated with the second image, are searched for in a pre-established image library; the first pose is rotated and/or translated to obtain N first candidate poses; and the current pose of the mobile device is determined according to the laser positioning scores of the N first candidate poses. The current pose of the mobile device is thus determined through the laser positioning score, which solves the problem of large robot pose positioning errors and improves the accuracy of pose positioning.
Alternatively, the execution subject of the above steps may be a terminal or the like, but is not limited thereto.
As an alternative implementation, the mobile device may be a mobile robot. As shown in fig. 3, the mobile robot in this application includes a mobile chassis, a 2D lidar sensor and a camera, and a motion controller, a motor, a battery, an embedded computer, an odometer and the like are disposed in the mobile chassis.
As an optional implementation, images captured by the mobile robot at each pose are stored in advance in a pre-established image library. In this embodiment, when it is determined that the mobile robot has lost its positioning, a first image is captured by the robot's camera; a second image whose similarity to the first image is greater than or equal to the first preset threshold is searched for in the image library by matching feature points and descriptors; the first pose associated with the second image is taken as an initial value; and the current pose of the robot is then determined more precisely on that basis by a laser relocalization method.
As an optional implementation, when the robot has lost its positioning, feature points are extracted from the current image frame (the first image) collected by the robot and their descriptors are computed; the loop-closure detection algorithm DBoW3 is then used to detect whether the pre-established image library contains an image frame with very high similarity. A first preset threshold may be set, and an image frame whose similarity is greater than or equal to this threshold is taken as the second image similar to the first image. The first pose associated with the second image is taken as the preliminary estimate of the robot's current pose.
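The library lookup in this step can be sketched as a bag-of-words similarity search. This is an illustrative stand-in for DBoW3, which scores bag-of-words vectors far more efficiently; the dict-based vectors, cosine score, and function names are assumptions for the sketch:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two bag-of-words vectors
    (dicts mapping visual-word id -> weight)."""
    dot = sum(w * b.get(k, 0.0) for k, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def find_second_image(query_bow, image_library, threshold):
    """Return (key, associated_pose) of the library image most similar to
    the query image, or None if no similarity reaches the threshold."""
    best_key, best_score = None, threshold
    for key, (bow, pose) in image_library.items():
        score = cosine_similarity(query_bow, bow)
        if score >= best_score:
            best_key, best_score = key, score
    return (best_key, image_library[best_key][1]) if best_key is not None else None
```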
As an alternative embodiment, feature-point matching may be performed between the feature points of the second image and those of the first image via their descriptors. Searching for the feature points of the second image only within a certain range around the feature points of the first image accelerates the matching. Outliers among the matched feature points of the first and second images are removed using the epipolar geometric constraint, and the first pose, i.e. the preliminary estimate of the pose at which the first image was taken, is solved by the PnP algorithm using the 3D points corresponding to the matched points on the second image and the 2D points of the first image.
The epipolar geometric constraint is:
x0^T E x1 = 0
where x0 denotes the normalized coordinates of a matched feature point on the first image, x1 denotes the normalized coordinates of the corresponding feature point on the second image, and E denotes the essential matrix.
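The outlier-removal step can be illustrated directly from the constraint above: a matched pair is kept only if its algebraic residual |x0^T E x1| is near zero. A pure-Python sketch (in practice the essential matrix and the subsequent PnP solution would come from a vision library); the function names and tolerance are illustrative:

```python
def epipolar_residual(x0, x1, E):
    """Algebraic epipolar residual |x0^T E x1| for one matched pair.
    x0, x1: homogeneous normalized coordinates (3-tuples); E: 3x3 essential
    matrix as nested lists."""
    Ex1 = [sum(E[i][j] * x1[j] for j in range(3)) for i in range(3)]
    return abs(sum(x0[i] * Ex1[i] for i in range(3)))

def filter_outliers(matches, E, tol=1e-3):
    """Keep only matched pairs (x0, x1) that satisfy the epipolar
    constraint x0^T E x1 = 0 up to the tolerance."""
    return [(x0, x1) for x0, x1 in matches if epipolar_residual(x0, x1, E) < tol]
```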
Optionally, the determining the current pose of the mobile device according to the laser positioning scores of the N first candidate poses comprises: determining the current pose of the mobile equipment by using a branch and bound method and through scores of N groups of laser point clouds of the mobile equipment on N first candidate poses respectively falling into a group of probability grid maps with different resolutions, wherein the group of probability grid maps with different resolutions are obtained by down-sampling an original probability grid map, the original probability grid map comprises a plurality of grids, and a numerical value marked on each grid is used for representing the score of the laser falling onto the grid.
As an alternative embodiment, the prior map created by laser Simultaneous Localization and Mapping (SLAM) is saved as probability grid maps at different resolutions. The value identified on each grid in the probability grid map may be a probability value indicating the probability that the grid is occupied by an obstacle; the higher that probability, the higher the score of a laser point falling on the grid. In this embodiment, fig. 4 is a schematic diagram of an original probability grid map according to an alternative embodiment of the present invention, in which each box represents a grid, and the value identified on the grid, such as 0.4 or 0.7, represents the probability that the grid is occupied by an obstacle; this probability value may also serve as the score of a laser point falling on that grid. In this embodiment, pose score calculation means projecting the laser point cloud into the probability grid map based on the current pose: if a laser point falls on a probability grid, the value of that grid is taken as the score of that point, and finally the average score over all points in the point cloud data is taken as the score of the current pose.
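The pose score computation just described can be sketched as follows, assuming a dict-based grid map keyed by (row, col) and a planar pose (x, y, theta); the representation and names are assumptions for illustration:

```python
import math

def pose_score(pose, point_cloud, grid_map, resolution, default=0.0):
    """Score a candidate pose: transform the laser point cloud (points in
    the robot frame) by the pose, look up the probability of the grid cell
    each point falls into, and return the average over all points."""
    x, y, theta = pose
    c, s = math.cos(theta), math.sin(theta)
    total = 0.0
    for px, py in point_cloud:
        # Project the laser point into the map frame under the candidate pose.
        mx = x + c * px - s * py
        my = y + s * px + c * py
        cell = (int(my // resolution), int(mx // resolution))
        total += grid_map.get(cell, default)
    return total / len(point_cloud) if point_cloud else 0.0
```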
As an alternative embodiment, a search range of a certain size may be added automatically around the position of the preliminarily estimated first pose; for example, a search box of a predetermined size, such as 2 m × 2 m, may be generated by extending predetermined ranges up, down, left and right with the position of the first pose as the center. A series of rotation candidate poses is generated from the first pose and a first preset angle difference, where the first preset angle difference may be determined according to the actual situation and may be 30, 45 or 60 degrees, etc. In this embodiment, the first pose may also be translated based on a probability grid map of any resolution (for example, the minimum-resolution probability grid map), translating from the first pose as the starting point out to the search box boundary to obtain translation candidate poses; the N first candidate poses thus comprise rotation candidate poses and/or translation candidate poses. In this embodiment, the rotated poses obtained above may likewise be translated within the search box based on a probability grid map of any resolution, from the position of each rotated pose out to the search box boundary, to generate a series of translated candidate poses.
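The candidate-pose generation described above can be sketched as enumerating rotations at the preset angular step combined with translations on a grid inside the search box. The step sizes, box size, and function name below are assumptions for illustration:

```python
import math

def candidate_poses(first_pose, angle_step_deg, search_half, trans_step):
    """Enumerate candidate poses around the visually estimated first pose:
    every rotation at the preset angular step, combined with every
    translation on a grid inside a square search box centred on the pose."""
    x0, y0, theta0 = first_pose
    poses = []
    angles = [math.radians(a) for a in range(0, 360, angle_step_deg)]
    steps = int(search_half / trans_step)
    for dtheta in angles:
        for i in range(-steps, steps + 1):
            for j in range(-steps, steps + 1):
                poses.append((x0 + i * trans_step,
                              y0 + j * trans_step,
                              theta0 + dtheta))
    return poses
```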
As an alternative, scores of the candidate poses are determined in probability grid maps at different resolutions. Suppose an original probability grid map with 8 cm resolution is downsampled by 4 layers to obtain a map with 1.28 m resolution, the number of downsampling layers being determined by the size of the prior map. Similarly, the 2 cm high-resolution map is downsampled by 2 layers to obtain an 8 cm-resolution map, so that the result found on the 8 cm prior map can transition accurately to the 2 cm high-resolution prior map.
Based on the search range represented by the search box on the minimum-resolution probability grid map, the score S_i1 of every candidate pose (x, y, θ) in each grid of the search box can be calculated at the minimum resolution, and the best score S_B over all grids in the search range is then found. Fig. 5 is a schematic diagram of the branch-and-bound method according to an alternative embodiment of the present invention. The optimal pose is selected as the current pose of the robot by the branch-and-bound method, specifically as follows:
Suppose the grid corresponding to the best score S_B is G_B. Taking grid G_B as the parent node, the upper bounds of the leaf nodes of its subtree are solved. That is, G_B is branched: among its four branches (downsampling layer 1), suppose the highest score S_B1 corresponds to grid G_B1; G_B1 is then branched in turn (downsampling layer 2), and so on until the original map resolution (downsampling layer 4) is reached, giving the highest leaf-node score S_B4 at layer 4. The scores S_i3 of the other nodes at layer 3 of the subtree are then compared with S_B4. If S_i3 < S_B4, the subtree with that node as parent is pruned, and no pose scores are calculated for any candidates under that node. If S_i3 > S_B4, the node is branched to check whether any node in the branch scores higher than S_B4, and if so the upper bound is updated. The same steps are repeated from layer 3 through layer 2 to layer 1, updating the upper-bound value B1 of subtree T1. The scores S_i1 of the other nodes at layer 1 are then compared with B1: if S_i1 < B1, the node and the entire subtree rooted at it are pruned, quickly discarding those candidates. If S_i1 > B1, the node is branched; if a branch node's score is less than B1, all subtrees rooted at that branch node are pruned, and if its score is greater than B1, the branch node is branched further, and so on until the original resolution (layer 4). It is then checked whether the score of a leaf node at that layer exceeds B1; if so, the best leaf node (and the bound) is updated, and otherwise no processing is performed. The same operation is performed in turn on the other nodes at layer 1, finally yielding the optimal pose on the 8 cm-resolution prior map, which is taken as the current pose of the robot.
Based on the pose found on the 8 cm-resolution prior map, with that pose as the center, a search box of 0.2 m × 0.2 m and a search angle limited to ±10° are used on the 2 cm-resolution prior map, and the method of step c is applied for a quick search, yielding a fine search result on the 2 cm-resolution map. The 2 cm-resolution map is down-sampled by 2 layers; down-sampling the 2 cm map yields an 8 cm-resolution map, allowing an accurate transition from the pose found on the 8 cm coarse-resolution prior map. Through the branch-and-bound method over multi-resolution maps, the search result on the coarse-resolution map narrows the search range on the high-resolution map, improving the search speed on the high-resolution map.
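The pruning logic above can be sketched compactly. The following is a minimal, illustrative Python recursion over a pyramid of max-pooled score maps (a 2-D cell search only; the full method also branches over the angle, and all names here are assumptions, not the patent's implementation). Because each coarse cell's score upper-bounds its children, a whole subtree is cut as soon as its bound cannot beat the incumbent best leaf.

```python
import numpy as np

def branch_and_bound(score_fn, depth, cell, best=(-np.inf, None)):
    """Depth-first branch and bound over a pose-search tree.
    score_fn(depth, cell) must return an upper bound on the scores of all
    leaves under `cell` (exact at depth == 0, e.g. via max-pooled maps).
    Returns (best_score, best_leaf_cell)."""
    best_score, _ = best
    if score_fn(depth, cell) <= best_score:
        return best  # prune: the bound cannot beat the incumbent
    if depth == 0:
        return (score_fn(0, cell), cell)  # leaf at original resolution
    # Branch into the four children at the next finer resolution,
    # visiting the highest-bound child first (best-first within DFS).
    children = [(cell[0] * 2 + dx, cell[1] * 2 + dy)
                for dx in (0, 1) for dy in (0, 1)]
    for child in sorted(children, key=lambda c: -score_fn(depth - 1, c)):
        best = branch_and_bound(score_fn, depth - 1, child, best)
    return best
```

Here `score_fn(depth, cell)` plays the role of looking up the pre-computed coarse-resolution probability grid maps.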
Optionally, down-sampling the original probability grid map to obtain the set of probability grid maps with different resolutions includes: down-sampling the original probability grid map through a bidirectional sliding window to obtain a probability grid map of a preset resolution, wherein the preset resolution is coarser than that of the original probability grid map; and sorting the probability grid map of the preset resolution and the original probability grid map by resolution to obtain a group of probability grid maps with different resolutions.
As an optional implementation manner, the bidirectional sliding window includes an up-down sliding window and a left-right sliding window. For example, Fig. 6 is a schematic diagram of the probability map obtained after sliding the original probability grid map left and right according to an optional embodiment of the present invention, and Fig. 7 is a schematic diagram of the probability map obtained after sliding it up and down. The grid probability of the down-sampled resolution map is the maximum value over all grids in the corresponding sliding-window region of the original-resolution map; this guarantees that a pose score at the down-sampled resolution is an upper bound on the pose score at the original resolution. Pose-score calculation means projecting the laser point cloud into the probability grid map based on the current pose, taking the probability value of the grid each laser point falls on as that point's score, and finally taking the average score over all points of the frame of point cloud data as the score of the current pose.
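As an illustration, the block-max down-sampling and the pose-score computation described above can be sketched as follows. This is a minimal NumPy version under assumptions not stated in the patent (block-aligned windows, a map origin at (0, 0)); the two reshape/max passes play the role of the left-right and up-down sliding windows, and all names are illustrative.

```python
import numpy as np

def downsample_max(grid, factor):
    """Down-sample a probability grid by taking the max over each
    factor x factor block, so every coarse cell upper-bounds the
    fine cells it covers."""
    h, w = grid.shape
    ph, pw = (-h) % factor, (-w) % factor       # pad to a multiple of factor
    g = np.pad(grid.astype(float), ((0, ph), (0, pw)), constant_values=-np.inf)
    g = g.reshape(g.shape[0], -1, factor).max(axis=2)  # left-right pass
    g = g.reshape(-1, factor, g.shape[1]).max(axis=1)  # up-down pass
    return g

def pose_score(points_xy, grid, resolution, origin=(0.0, 0.0)):
    """Average grid probability over the cells hit by the projected point cloud."""
    idx = np.floor((points_xy - np.asarray(origin)) / resolution).astype(int)
    idx = np.clip(idx, 0, np.array(grid.shape)[::-1] - 1)  # clamp (col, row)
    return grid[idx[:, 1], idx[:, 0]].mean()
```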
As an optional implementation mode, the pose of the robot at time t is recorded as x_t = [x y θ]^T, and the k-th laser point of the laser radar has coordinates [x_k,sens y_k,sens θ_k,sens]^T relative to the robot coordinate system. The coordinates of the k-th laser point in the world coordinate system are:

x_k = x + x_k,sens · cos θ − y_k,sens · sin θ
y_k = y + x_k,sens · sin θ + y_k,sens · cos θ

where [x_k y_k]^T are the coordinates of the k-th laser beam in the world coordinate system.
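The transform above can be written as a small helper (a sketch; the function and argument names are illustrative):

```python
import math

def laser_to_world(pose, point_sens):
    """Transform a laser point from the robot frame into the world frame.
    pose = (x, y, theta); point_sens = (x_k_sens, y_k_sens) in the robot frame."""
    x, y, theta = pose
    xs, ys = point_sens
    c, s = math.cos(theta), math.sin(theta)
    # 2-D rigid-body transform: rotate by theta, then translate by (x, y)
    return (x + c * xs - s * ys, y + s * xs + c * ys)
```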
Optionally, the method further comprises: acquiring laser point cloud of the mobile equipment at the current pose; determining the matching rate of the laser point cloud of the mobile device at the current pose and the original probability grid map; and under the condition that the matching rate is smaller than or equal to a second preset threshold value, updating the original probability grid map according to the laser point cloud of the mobile equipment in the current pose.
As an alternative embodiment, the environment in which the robot operates may change, requiring the prior map to be updated. In this embodiment, during robot positioning, whether the matching score between the current laser data and the prior map (the original probability grid map) exceeds a certain threshold (for example, 40 points) is judged; if it does not, the original probability grid map is considered inconsistent with the laser data at the current pose, and the map may be updated according to that laser data. Updating the prior map in real time improves the accuracy of the current pose calculation.
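The patent only states that the map is updated from the current point cloud when the match rate is low; one common realization, shown here as an assumed sketch, is the standard log-odds occupancy update (the update rule and the p_hit constant are illustrative, not from the source):

```python
import numpy as np

def update_grid(grid, hit_cells, p_hit=0.55):
    """Refresh a prior probability grid with the current scan using the
    standard log-odds occupancy update: raise the occupancy evidence of
    every cell a laser point falls on, leaving the other cells untouched."""
    def logit(p):
        return np.log(p / (1.0 - p))
    lo = logit(grid.astype(float))
    for r, c in hit_cells:        # cells the current laser points fall on
        lo[r, c] += logit(p_hit)  # accumulate occupancy evidence
    return 1.0 / (1.0 + np.exp(-lo))
```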
Optionally, before searching the pre-established image library for a second image whose similarity to the first image is greater than or equal to the first preset threshold, the method further includes: when the mobile equipment determines, during its movement in the original probability grid map, that the moving distance or rotation angle is greater than or equal to a first preset threshold, taking the pose of the mobile equipment as a key pose; and establishing an association between the key frame acquired by the mobile equipment at the key pose and the corresponding key pose, and storing the association in the image library, wherein the key poses include the first pose and the key frames include the first image.
As an optional implementation, a probability grid map of the robot's current operating environment is constructed based on laser simultaneous localization and mapping (SLAM), and a localization algorithm is then run along a fixed route in the map; the precision of laser localization is higher than that of visual localization. While the robot travels along the fixed route, the localization algorithm estimates its current pose in real time; a key frame is selected according to whether the translation or rotation between poses, or the elapsed time, exceeds a certain threshold; and the feature points and descriptors extracted from the key frame are computed and added to the visual feature library. The threshold may be determined according to actual conditions.
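The keyframe-selection rule can be sketched as follows. The thresholds are illustrative (the patent leaves them to be determined according to actual conditions), and the time criterion mentioned above is omitted for brevity:

```python
import math

def is_new_keyframe(pose, last_kf_pose, trans_thresh=0.5,
                    rot_thresh=math.radians(15)):
    """Spawn a new keyframe when translation or rotation since the last
    keyframe exceeds a threshold (0.5 m / 15 deg here are illustrative)."""
    dx = pose[0] - last_kf_pose[0]
    dy = pose[1] - last_kf_pose[1]
    # wrap the heading difference into (-pi, pi]
    dtheta = (pose[2] - last_kf_pose[2] + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(dx, dy) >= trans_thresh or abs(dtheta) >= rot_thresh
```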
As an optional embodiment, for the feature points extracted from a keyframe, the corresponding 3D point coordinates are estimated using the camera perspective projection model. Let the pose of the keyframe be T, the pixel coordinate of a feature point extracted on that frame be p, the corresponding 3D point coordinate be P_W, and the camera intrinsic matrix be K; then:

s · p̃ = K · T · P̃_W    (1)

where p̃ is the homogeneous form of the pixel coordinate p, P̃_W the homogeneous form of P_W, T here acting as the 3×4 world-to-camera transform of the keyframe, and s the depth of the point in the camera frame. In equation (1), the keyframe pose T and the pixel coordinate p are known, so the 3D point coordinate corresponding to the feature point can be obtained. A descriptor is computed for each extracted feature point, and the frame's feature points are added to the image library. For adjacent keyframes, feature points are extracted and descriptors computed on the adjacent frames, matching points are found from the descriptors, and outliers are rejected according to the epipolar geometric constraint:
x_0^T · E · x_1 = 0    (2)
wherein x_0 represents the normalized coordinates of the matched feature point on the current frame, x_1 represents the normalized coordinates of the matched feature point on the adjacent frame, and E represents the essential matrix. For feature points that can be matched on the current frame, the 3D coordinates of the corresponding feature points in the previous frame's camera are used as the 3D coordinates of the matched feature points of this frame, and the 3D point coordinates of unmatched feature points are calculated using formula (1). In later visual repositioning, the pose of the current frame is estimated using the pixel coordinates matched between the current frame and the visual image library together with the corresponding 3D point coordinates.
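Equations (1) and (2) can be exercised with a short sketch: back-projecting a pixel to its 3D point given the keyframe pose and a depth, and filtering matches by the epipolar residual. How the depth s is obtained is not stated in the source, and `T_cw` is assumed here to be the 4×4 world-to-camera transform; all names are illustrative.

```python
import numpy as np

def backproject(p_uv, depth, K, T_cw):
    """World 3-D point P_W for pixel p from s * p~ = K * T_cw * P_W (eq. (1)),
    given the depth s, intrinsics K and 4x4 world-to-camera transform T_cw."""
    p_h = np.array([p_uv[0], p_uv[1], 1.0])
    P_cam = depth * np.linalg.solve(K, p_h)                   # pixel -> camera
    return np.linalg.solve(T_cw, np.append(P_cam, 1.0))[:3]   # camera -> world

def epipolar_residual(x0, x1, E):
    """Residual of x0^T * E * x1 (eq. (2)) for normalized homogeneous coords."""
    return float(np.asarray(x0) @ np.asarray(E) @ np.asarray(x1))

def reject_outliers(matches, E, thresh=1e-3):
    """Keep only the matches whose epipolar residual is close to zero."""
    return [(x0, x1) for x0, x1 in matches
            if abs(epipolar_residual(x0, x1, E)) < thresh]
```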
Optionally, the method further comprises: acquiring a third image shot by the mobile equipment at the current pose; determining the matching rate of a third image and the second image; and under the condition that the matching rate is smaller than or equal to a third preset threshold value, updating the second image in the image library to be the third image, and updating the first pose corresponding to the second image to be the current pose.
As an alternative embodiment, the environment in which the robot operates may change, requiring the image library used for visual repositioning to be updated. In this embodiment, during robot positioning, whether to update the image library is determined by judging whether the matching score of the current laser data against the prior map (the original probability grid map) exceeds a certain threshold (for example, 40 points); the threshold may be determined according to actual situations and is not limited here. When the score is below the threshold and the image frame in the library whose pose is closest to the current pose has low similarity with the current image, the features corresponding to that image frame are deleted from the library and the features extracted from the current frame are added. Concretely, after the robot's current pose is determined, the robot captures an image at the current pose, obtaining a third image. The third image is compared with the second image in the image library; if the matching rate is below the preset threshold, the second image stored in the library does not match what the robot sees at the current pose, and the second image is updated to the image the robot captured there. Updating the images in the library in real time improves the accuracy of the initial estimate of the robot's current pose.
Optionally, the method further comprises: if the second image is not found in the pre-established image library, controlling the mobile equipment to move a predetermined distance or rotate by a second preset angle; acquiring a fourth image shot by the mobile equipment; searching the pre-established image library for a fifth image whose similarity to the fourth image is greater than or equal to the first preset threshold, and a second pose associated with the fifth image, wherein the fifth image is an image shot by the mobile equipment at the second pose; rotating the second pose according to the first preset angle difference to obtain N second candidate poses, wherein N is an integer greater than or equal to 1, and the N second candidate poses comprise the second pose; and determining the current pose of the mobile equipment according to the laser positioning scores of the N second candidate poses.
As an optional embodiment, when visual relocation is used and no image frame with a high matching degree can be found in the visual image library for the current frame image, the robot is first rotated or moved (with the obstacle-avoidance function enabled) to change its field of view, and it is judged whether an image frame with a high matching degree can now be found in the library. If an image frame with high similarity is found, feature matching is performed, outliers are eliminated, and the current pose is estimated as the initial value for laser local relocation.
If no image frame with high similarity can be found, the current initial pose is recurred using the pose from a period before localization was lost together with the odometer data recorded during that period, or using the pose at machine start-up together with the odometer data accumulated since, and is used as the initial value for laser local relocation. Record the pose at the previous moment as x_{t-1} = [x y θ]^T, where x and y are the horizontal and vertical coordinates in the world coordinate system, representing the position of the robot in the two-dimensional plane of the x and y axes, and θ represents the heading of the robot in that plane. The pose at the current moment can be predicted using the motion model, whose equation is shown in equation (4).
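The odometry-based recursion can be sketched as follows. The source does not reproduce equation (4), so this is the standard 2-D motion-model form under that assumption, with illustrative names:

```python
import math

def propagate_pose(pose, odom_increments):
    """Recur the current pose from the last known-good pose x_{t-1} = (x, y, theta)
    and a list of odometry increments (dx, dy, dtheta) expressed in the robot
    frame, applying each increment through the current heading."""
    x, y, theta = pose
    for dx, dy, dtheta in odom_increments:
        x += dx * math.cos(theta) - dy * math.sin(theta)
        y += dx * math.sin(theta) + dy * math.cos(theta)
        theta = (theta + dtheta + math.pi) % (2 * math.pi) - math.pi  # wrap
    return (x, y, theta)
```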
As an alternative implementation, Fig. 8 is a flow chart of vision and laser repositioning according to an alternative embodiment of the present invention. When the visual bag-of-words model is constructed, key frames are selected according to the variation of the laser-localization pose, which facilitates construction of the visual image library and makes the poses stored with it more accurate. When the robot's working environment changes, the images in the library are updated as appropriate according to the laser-localization scores and the matching degree between the image acquired at the current pose and the library images with similar poses, improving the success rate of visual repositioning under environmental change.
The robot pose estimated by visual repositioning carries a certain error; using the visual repositioning result as the initial value for laser repositioning improves the precision of the initial position, lets the localization converge to the optimal value faster, and guarantees localization accuracy. When the positioning of the equipment is lost, or the equipment has just started, and visual repositioning fails, the equipment uses the last correct pose before positioning was lost together with the odometer data from that pose to the current one (or the pose before shutdown) as the initial value for laser local repositioning, ensuring both the success rate of repositioning and the speed of the pose search. Alternatively, when visual repositioning fails, the equipment rotates or arcs (with obstacle avoidance enabled) to change its field of view while recording the odometer data along the way, and uses either the visual repositioning result or the pose recurred from the previous correct pose and inertial-navigation information as the initial value for laser repositioning, ensuring that repositioning succeeds.
The branch-and-bound method based on multi-resolution maps accelerates laser local repositioning: the coarse-resolution map reduces the number of candidate solutions, speeding up laser repositioning, and the coarse maps obtained by down-sampling reduce map memory usage. Based on the repositioning search result on the coarse-resolution map, the search range on the high-resolution prior map is narrowed, so the search result on the high-resolution map is obtained quickly; and since the resolution of the high-resolution prior map is consistent with that of the prior map used for robot localization, the accuracy of local laser repositioning is guaranteed.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
In this embodiment, a pose determination apparatus for a mobile device is further provided, and the apparatus is used to implement the foregoing embodiments and preferred embodiments, which have already been described and are not described again. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the means described in the embodiments below are preferably implemented in software, an implementation in hardware, or a combination of software and hardware is also possible and contemplated.
Fig. 9 is a block diagram of a configuration of a pose determination apparatus of a mobile device according to an embodiment of the present invention, as shown in fig. 9, the apparatus including: an obtaining module 92, configured to obtain a first image captured by a mobile device when it is determined that the mobile device is lost in positioning; a searching module 94, configured to search, in a pre-established image library, a second image with a similarity greater than or equal to a first preset threshold with the first image, and a first pose associated with the second image, where the second image is an image captured by the mobile device in the first pose; a rotation module 96, configured to perform rotation and/or translation processing on the first pose to obtain N first candidate poses, where N is an integer greater than or equal to 1, and the N first candidate poses include the first pose; a determining module 98, configured to determine the current pose of the mobile device according to the laser positioning scores of the N first candidate poses.
Optionally, the above apparatus is further configured to implement the determining the current pose of the mobile device according to the laser positioning scores of the N first candidate poses by: determining the current pose of the mobile equipment by using a branch and bound method and through scores of N groups of laser point clouds of the mobile equipment on N first candidate poses respectively falling into a group of probability grid maps with different resolutions, wherein the group of probability grid maps with different resolutions are obtained by down-sampling an original probability grid map, the original probability grid map comprises a plurality of grids, and a numerical value marked on each grid is used for representing the score of the laser falling onto the grid.
Optionally, the apparatus is further configured to perform the down-sampling on the original probability grid map to obtain the set of probability grid maps with different resolutions by: performing down-sampling on the original probability grid map through a bidirectional sliding window to obtain a probability grid map with a preset resolution, wherein the resolution of the probability grid map with the preset resolution is smaller than that of the original probability grid map; and sequencing the probability grid map with the preset resolution and the original probability grid map according to the resolution to obtain a group of probability grid maps with different resolutions.
Optionally, the apparatus is further configured to acquire a laser point cloud of the mobile device at the current pose; determining the matching rate of the laser point cloud of the mobile device at the current pose and the original probability grid map; and under the condition that the matching rate is smaller than or equal to a second preset threshold value, updating the original probability grid map according to the laser point cloud of the mobile equipment at the current pose.
Optionally, the apparatus is further configured to, before searching for a second image with a similarity greater than or equal to a first preset threshold with the first image in a pre-established image library, determine, in a moving process of the mobile device in the original probability grid map, that a moving distance or a rotation angle is greater than or equal to the first preset threshold, that the pose of the mobile device is a key pose; establishing an association relation between a key frame acquired by the mobile equipment at the key pose and a corresponding key pose, and storing the association relation in the image library, wherein the key pose comprises the first pose, and the key frame comprises the first image.
Optionally, the apparatus is further configured to acquire a third image captured by the mobile device in the current pose; determining the matching rate of a third image and the second image; and under the condition that the matching rate is smaller than or equal to a third preset threshold value, updating the second image in the image library to be the third image, and updating the first pose corresponding to the second image to be the current pose.
Optionally, the apparatus is further configured to control the mobile device to move by a predetermined distance or rotate by a second preset angle if the second image is not found in the pre-established image library; acquiring a fourth image shot by the mobile equipment; searching a fifth image with the similarity of the fourth image being larger than or equal to a first preset threshold value and a second pose associated with the fifth image in a pre-established image library, wherein the fifth image is an image shot by the mobile equipment in the second pose; rotating the second pose according to the first preset angle difference to obtain N second candidate poses, wherein N is an integer greater than or equal to 1, and the N second candidate poses comprise the second pose; determining a current pose of the mobile device according to the laser positioning scores of the N second candidate poses.
It should be noted that, the above modules may be implemented by software or hardware, and for the latter, the following may be implemented, but not limited to: the modules are all positioned in the same processor; alternatively, the modules are located in different processors in any combination.
Embodiments of the present invention also provide a storage medium having a computer program stored therein, wherein the computer program is arranged to perform the steps of any of the above method embodiments when executed.
Alternatively, in the present embodiment, the storage medium may be configured to store a computer program for executing the steps of:
s1, acquiring a first image shot by mobile equipment under the condition that the mobile equipment is determined to be lost in positioning;
s2, searching a second image with the similarity larger than or equal to a first preset threshold value with the first image and a first pose associated with the second image in a pre-established image library, wherein the second image is an image shot by the mobile equipment in the first pose;
s3, rotating and/or translating the first pose to obtain N first candidate poses, wherein N is an integer greater than or equal to 1, and the N first candidate poses comprise the first pose;
and S4, determining the current pose of the mobile equipment according to the laser positioning scores of the N first candidate poses.
Optionally, in this embodiment, the storage medium may include, but is not limited to: various media capable of storing computer programs, such as a usb disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Embodiments of the present invention also provide an electronic device comprising a memory having a computer program stored therein and a processor arranged to run the computer program to perform the steps of any of the above method embodiments.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
Optionally, in this embodiment, the processor may be configured to execute the following steps by a computer program:
s1, acquiring a first image shot by mobile equipment under the condition that the mobile equipment is determined to be lost in positioning;
s2, searching a second image with the similarity larger than or equal to a first preset threshold value with the first image and a first pose associated with the second image in a pre-established image library, wherein the second image is an image shot by the mobile equipment in the first pose;
s3, rotating and/or translating the first pose to obtain N first candidate poses, wherein N is an integer greater than or equal to 1, and the N first candidate poses comprise the first pose;
and S4, determining the current pose of the mobile equipment according to the laser positioning scores of the N first candidate poses.

Optionally, the specific examples in this embodiment may refer to the examples described in the above embodiments and optional implementation manners, and are not described here again.
It will be apparent to those skilled in the art that the modules or steps of the present invention described above may be implemented by a general purpose computing device, they may be centralized on a single computing device or distributed across a network of multiple computing devices, and alternatively, they may be implemented by program code executable by a computing device, such that they may be stored in a storage device and executed by a computing device, and in some cases, the steps shown or described may be performed in an order different than that described herein, or they may be separately fabricated into individual integrated circuit modules, or multiple ones of them may be fabricated into a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the principle of the present invention shall be included in the protection scope of the present invention.

Claims (9)

1. A pose determination method of a mobile device, comprising:
under the condition that the mobile equipment is determined to be lost in positioning, acquiring a first image shot by the mobile equipment;
searching a second image with the similarity of the first image being larger than or equal to a first preset threshold value and a first pose associated with the second image in a pre-established image library, wherein the second image is an image shot by the mobile equipment at the first pose;
rotating and/or translating the first pose to obtain N first candidate poses, wherein N is an integer greater than or equal to 1, and the N first candidate poses comprise the first pose;
and determining the current pose of the mobile equipment by using a branch and bound method through scores of N groups of laser point clouds of the mobile equipment on N first candidate poses respectively falling into a group of probability grid maps with different resolutions, wherein the group of probability grid maps with different resolutions are obtained by down-sampling an original probability grid map, the original probability grid map comprises a plurality of grids, and a numerical value marked on each grid is used for representing the score of the laser falling onto the grid.
2. The method of claim 1, wherein the down-sampling the original probability grid map to obtain the set of probability grid maps of different resolutions comprises:
performing down-sampling on the original probability grid map through a bidirectional sliding window to obtain a probability grid map with a preset resolution, wherein the resolution of the probability grid map with the preset resolution is smaller than that of the original probability grid map;
and sequencing the probability grid map with the preset resolution and the original probability grid map according to the resolution to obtain a group of probability grid maps with different resolutions.
3. The method of claim 1, further comprising:
acquiring laser point cloud of the mobile equipment at the current pose;
determining the matching rate of the laser point cloud of the mobile device at the current pose and the original probability grid map;
and under the condition that the matching rate is smaller than or equal to a second preset threshold value, updating the original probability grid map according to the laser point cloud of the mobile equipment at the current pose.
4. The method according to claim 1, wherein before searching the pre-established image library for the second image whose similarity to the first image is greater than or equal to the first preset threshold, the method further comprises:
when the mobile equipment determines that the moving distance or the rotating angle is greater than or equal to a first preset threshold value in the moving process of the original probability grid map, the pose of the mobile equipment is a key pose;
establishing an association relation between a key frame acquired by the mobile equipment at the key pose and a corresponding key pose, and storing the association relation in the image library, wherein the key pose comprises the first pose, and the key frame comprises the first image.
5. The method of claim 1, further comprising:
acquiring a third image shot by the mobile equipment at the current pose;
determining the matching rate of a third image and the second image;
and under the condition that the matching rate is smaller than or equal to a third preset threshold value, updating the second image in the image library to be the third image, and updating the first pose corresponding to the second image to be the current pose.
6. The method of claim 1, further comprising:
if the second image is not found in the pre-established image library, controlling the mobile equipment to move for a preset distance or rotate for a second preset angle;
acquiring a fourth image shot by the mobile equipment;
searching a fifth image with the similarity of the fourth image being larger than or equal to a first preset threshold value and a second pose associated with the fifth image in a pre-established image library, wherein the fifth image is an image shot by the mobile equipment in the second pose;
rotating the second poses according to a first preset angle difference to obtain N second candidate poses, wherein N is an integer larger than or equal to 1, and the N second candidate poses comprise the second poses;
determining a current pose of the mobile device according to the laser positioning scores of the N second candidate poses.
7. A pose determination apparatus of a mobile device, comprising:
the mobile equipment comprises an acquisition module, a storage module and a processing module, wherein the acquisition module is used for acquiring a first image shot by the mobile equipment under the condition that the mobile equipment is determined to be lost in positioning;
the searching module is used for searching, in a pre-established image library, a second image whose similarity to the first image is greater than or equal to a first preset threshold, and a first pose associated with the second image, wherein the second image is an image shot by the mobile equipment in the first pose;
the rotating module is used for rotating and/or translating the first pose to obtain N first candidate poses, wherein N is an integer greater than or equal to 1, and the N first candidate poses comprise the first pose;
a determining module, configured to determine a current pose of the mobile device according to the laser positioning scores of the N first candidate poses;
the determining module is further configured to determine the current pose of the mobile device through scores of N groups of laser point clouds of the mobile device in the N first candidate poses respectively falling on a group of probability grid maps with different resolutions, where the group of probability grid maps with different resolutions is obtained by down-sampling an original probability grid map, the original probability grid map includes multiple grids, and a numerical value identified on each grid is used to indicate a score of the laser falling on the grid.
8. A storage medium, in which a computer program is stored, wherein the program is executable by a terminal device or a computer to perform the method of any one of claims 1 to 6.
9. An electronic device comprising a memory and a processor, wherein the memory has stored therein a computer program, and wherein the processor is arranged to execute the computer program to perform the method of any of claims 1 to 6.
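The retrieval step shared by the method and apparatus claims — find a library image whose similarity to the query is at or above a preset threshold and return its associated pose — might look like the sketch below. The library layout and `similarity_fn` (e.g. a cosine similarity over global image descriptors) are assumptions for illustration, not part of the claims.

```python
def find_matching_image(library, query_descriptor, threshold, similarity_fn):
    """Search a pre-built image library for the stored image most similar
    to the query; return (image_id, pose) when the similarity clears the
    preset threshold, else None (positioning cannot be recovered)."""
    best = None
    best_sim = threshold
    for image_id, (descriptor, pose) in library.items():
        sim = similarity_fn(query_descriptor, descriptor)
        if sim >= best_sim:
            best, best_sim = (image_id, pose), sim
    return best
```

Returning None when no image clears the threshold lets the caller fall back to other relocalization strategies instead of trusting a weak match.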
CN202010963179.6A 2020-07-09 2020-09-14 Pose determination method and device of mobile equipment Active CN112179330B (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN202010963179.6A CN112179330B (en) 2020-09-14 2020-09-14 Pose determination method and device of mobile equipment
EP20944546.9A EP4153940A4 (en) 2020-07-09 2020-12-30 Systems and methods for pose determination
KR1020237003646A KR20230029981A (en) 2020-07-09 2020-12-30 Systems and methods for pose determination
PCT/CN2020/141652 WO2022007367A1 (en) 2020-07-09 2020-12-30 Systems and methods for pose determination

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010963179.6A CN112179330B (en) 2020-09-14 2020-09-14 Pose determination method and device of mobile equipment

Publications (2)

Publication Number Publication Date
CN112179330A CN112179330A (en) 2021-01-05
CN112179330B true CN112179330B (en) 2022-12-06

Family

ID=73920818

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010963179.6A Active CN112179330B (en) 2020-07-09 2020-09-14 Pose determination method and device of mobile equipment

Country Status (1)

Country Link
CN (1) CN112179330B (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111765884B (en) * 2020-06-18 2023-06-23 京东科技信息技术有限公司 Robot repositioning method, apparatus, electronic device and storage medium
EP4258078A4 (en) * 2021-01-13 2023-12-27 Huawei Technologies Co., Ltd. Positioning method and apparatus, and vehicle
CN114842069A (en) * 2021-01-30 2022-08-02 华为技术有限公司 Pose determination method and related equipment
CN112802103B (en) * 2021-02-01 2024-02-09 深圳万拓科技创新有限公司 Pose repositioning method, device, equipment and medium of laser sweeper
CN113029166A (en) * 2021-03-25 2021-06-25 上海商汤智能科技有限公司 Positioning method, positioning device, electronic equipment and storage medium
CN113124902B (en) * 2021-04-19 2024-05-14 追创科技(苏州)有限公司 Positioning correction method and device for mobile robot, storage medium and electronic device
CN113324537A (en) * 2021-04-27 2021-08-31 的卢技术有限公司 Vehicle pose acquisition method, vehicle positioning method and device, equipment and medium
CN113223077A (en) * 2021-05-21 2021-08-06 广州高新兴机器人有限公司 Method and device for automatic initial positioning based on vision-assisted laser
CN113129379A (en) * 2021-06-17 2021-07-16 同方威视技术股份有限公司 Global relocation method and device for automatic mobile equipment
CN113419249A (en) * 2021-06-18 2021-09-21 珠海市一微半导体有限公司 Repositioning method, chip and mobile robot
CN113701760B (en) * 2021-09-01 2024-02-27 火种源码(中山)科技有限公司 Robot anti-interference positioning method and device based on sliding window pose diagram optimization
CN114061488B (en) * 2021-11-15 2024-05-14 华中科技大学鄂州工业技术研究院 Object measurement method, system and computer readable storage medium
CN114088099A (en) * 2021-11-18 2022-02-25 北京易航远智科技有限公司 Semantic relocation method and device based on known map, electronic equipment and medium
CN114216452B (en) * 2021-12-06 2024-03-19 北京云迹科技股份有限公司 Positioning method and device for robot
CN114890009A (en) * 2022-05-24 2022-08-12 上海永力信息科技股份有限公司 Unmanned garbage cleaning robot
CN117333638A (en) * 2022-06-22 2024-01-02 华为技术有限公司 Navigation, visual positioning and navigation map construction method and electronic equipment

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107831765B (en) * 2017-10-23 2021-07-13 广州视源电子科技股份有限公司 Positioning method, device, equipment and storage medium
CN108717710B (en) * 2018-05-18 2022-04-22 京东方科技集团股份有限公司 Positioning method, device and system in indoor environment
CN110132284B (en) * 2019-05-30 2022-12-09 东北大学 Global positioning method based on depth information
CN110930453B (en) * 2019-10-30 2023-09-08 北京迈格威科技有限公司 Target object positioning method, target object positioning device and readable storage medium
CN111427047B (en) * 2020-03-30 2023-05-05 哈尔滨工程大学 SLAM method for autonomous mobile robot in large scene
CN111536964B (en) * 2020-07-09 2020-11-06 浙江大华技术股份有限公司 Robot positioning method and device, and storage medium


Similar Documents

Publication Publication Date Title
CN112179330B (en) Pose determination method and device of mobile equipment
CN111536964B (en) Robot positioning method and device, and storage medium
CN109974693B (en) Unmanned aerial vehicle positioning method and device, computer equipment and storage medium
US20200378767A1 (en) Methods, apparatus, and systems for localization and mapping
CN107990899B (en) Positioning method and system based on SLAM
US11313684B2 (en) Collaborative navigation and mapping
Lynen et al. Get out of my lab: Large-scale, real-time visual-inertial localization.
CN108279670B (en) Method, apparatus and computer readable medium for adjusting point cloud data acquisition trajectory
CN111209978B (en) Three-dimensional visual repositioning method and device, computing equipment and storage medium
CN111652929A (en) Visual feature identification and positioning method and system
CN114623817B (en) Self-calibration-contained visual inertial odometer method based on key frame sliding window filtering
CN111288971B (en) Visual positioning method and device
CN111983636A (en) Pose fusion method, pose fusion system, terminal, medium and mobile robot
CN111380515A (en) Positioning method and device, storage medium and electronic device
CN117218350A (en) SLAM implementation method and system based on solid-state radar
CN112393735A (en) Positioning method and device, storage medium and electronic device
CN114187418A (en) Loop detection method, point cloud map construction method, electronic device and storage medium
CN113096181A (en) Method and device for determining pose of equipment, storage medium and electronic device
CN115619954A (en) Sparse semantic map construction method, device, equipment and storage medium
CN114577216A (en) Navigation map construction method and device, robot and storage medium
CN110580737B (en) Image processing method, system and device with storage function
CN111583331A (en) Method and apparatus for simultaneous localization and mapping
CN117044478B (en) Mower control method and device, mower, electronic equipment and storage medium
CN116012624B (en) Positioning method, positioning device, electronic equipment, medium and automatic driving equipment
WO2023142353A1 (en) Pose prediction method and apparatus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210113

Address after: C10, No. 1199 Bin'an Road, Binjiang District, Hangzhou City, Zhejiang Province

Applicant after: ZHEJIANG HUARAY TECHNOLOGY Co.,Ltd.

Address before: No.1187 Bin'an Road, Binjiang District, Hangzhou City, Zhejiang Province

Applicant before: ZHEJIANG DAHUA TECHNOLOGY Co.,Ltd.

CB02 Change of applicant information

Address after: 310053 floor 8, building a, No. 1181 Bin'an Road, Binjiang District, Hangzhou City, Zhejiang Province

Applicant after: Zhejiang Huarui Technology Co.,Ltd.

Address before: C10, No. 1199 Bin'an Road, Binjiang District, Hangzhou City, Zhejiang Province

Applicant before: ZHEJIANG HUARAY TECHNOLOGY Co.,Ltd.

GR01 Patent grant