CN110207710B - Robot repositioning method and device - Google Patents


Publication number
CN110207710B
Authority
CN
China
Prior art keywords
local
robot
map
target
local map
Prior art date
Legal status
Active
Application number
CN201910561961.2A
Other languages
Chinese (zh)
Other versions
CN110207710A (en)
Inventor
廖景亮
李树仁
王运志
Current Assignee
Beijing Dog Intelligent Robot Technology Co ltd
Original Assignee
Beijing Dog Intelligent Robot Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Dog Intelligent Robot Technology Co ltd
Priority to CN201910561961.2A
Publication of CN110207710A
Application granted
Publication of CN110207710B
Legal status: Active

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 - Instruments for performing navigational calculations
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 - Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 - Lidar systems specially adapted for specific applications
    • G01S17/93 - Lidar systems specially adapted for specific applications for anti-collision purposes

Abstract

The invention relates to the technical field of robots, in particular to a robot repositioning method and a robot repositioning device. The method comprises the following steps: acquiring environment information through a camera and radar data to construct a local map and generate a local information table; acquiring a global information table corresponding to a preset global map, determining a target local map corresponding to the local map in the global map according to the corresponding relation between the local information table and the global information table, and selecting an effective error area according to an error parameter between the local map and the target local map; determining target radar data with the highest similarity to the current radar data of the robot by acquiring radar data sets of a plurality of selected pixel points in the effective error area; and finally determining the real pose of the robot according to the target pose corresponding to the target radar data. By applying the method, the current position of the robot can be quickly and accurately determined, ensuring that the robot accurately positions itself and judges the surrounding environment during operation.

Description

Robot repositioning method and device
Technical Field
The invention relates to the technical field of robots, in particular to a robot repositioning method and a robot repositioning device.
Background
With the rapid development of science and technology and the improvement of people's living standards, more and more work in daily life can be completed by robots, particularly simple and repetitive work such as carrying heavy objects and cleaning. When the robot starts working, and throughout its operation, the robot needs to be accurately positioned so as to ensure the efficiency of the work.
However, in the prior art, robot repositioning basically adopts either a laser radar method or a monocular vision algorithm. With the laser radar method, the laser radar scans the surrounding environment to perform a brute-force search, and only a rough pose of the robot is determined. With the monocular vision algorithm, two-dimensional codes need to be deployed in the environment, and the approximate position of the robot is determined by scanning the two-dimensional codes and performing feature matching; however, the monocular vision algorithm can only be used in environments where specific two-dimensional codes are attached, which limits the application range of the robot. Therefore, whether the laser radar method or the monocular vision algorithm is adopted, the repositioning error of the robot pose is large, the robot cannot accurately position itself during operation, and the misjudgment rate of the surrounding environment increases.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a robot repositioning method by which the robot can be repositioned at the initial moment, so that the real pose of the robot is determined and accurate positioning and judgment of the surrounding environment during the robot's operation are ensured.
The invention also provides a robot repositioning device for ensuring the realization and application of the method in practice.
A robot repositioning method, comprising:
when the robot is started, calling a preset camera and a laser radar to acquire the current environment information of the robot;
according to the current environment information of the robot, constructing a local map with the current position of the robot as an origin, and generating a local information table corresponding to the local map;
acquiring a preset global map and a global information table corresponding to the global map, and selecting a target local map corresponding to the local map from the global map based on the corresponding relation between the local information table and the global information table;
matching the local map with the target local map to obtain an error parameter between the local map and the target local map;
determining, based on the error parameter, that the robot is in an effective error region in the target local map;
selecting a plurality of pixel points in the effective error area, and determining a radar data set corresponding to each pixel point in the global map;
determining current radar data in the current environment information of the robot, matching the current radar data with each radar data in each radar data set, and obtaining target radar data with the highest similarity to the current radar data;
and acquiring the target pose of a target position pixel point corresponding to the target radar data, determining that the target pose is the real pose of the robot, and finishing the relocation of the robot.
Optionally, the method for acquiring the current environmental information of the robot by calling the preset camera and the laser radar includes:
calling a preset camera to obtain object information corresponding to each identified object in the surrounding environment of the robot, and determining the relative position of each identified object relative to the robot through a preset laser radar;
and determining the current environment information of the robot based on the object information corresponding to each identified object and the relative position of each identified object relative to the robot.
Optionally, the method for generating the local information table corresponding to the local map includes:
determining local coordinates of each of the identified objects in the local map based on the relative position of the respective identified object with respect to the robot;
and generating a local information table corresponding to the local map according to the object information corresponding to the identified object and the local coordinate of each identified object in the local map.
Optionally, in the above method, the matching of the local map with the target local map to obtain an error parameter between the local map and the target local map includes:
determining each local object contained in the target local map, wherein each local object corresponds to each identified object in the local map one to one;
generating a target local information table corresponding to the target local map based on the global information table, wherein the target local information table comprises real coordinates of each local object in the global map;
calculating the relative distance between each local object in the target local map according to the real coordinate of each local object in the global map;
calculating the relative distance between the recognized objects in the local map according to the local coordinates of each recognized object in the local map;
and performing a matching calculation on the relative distances between the local objects and the relative distances between the recognized objects to obtain an error parameter between the local map and the target local map.
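A minimal sketch of this matching calculation, assuming pairwise Euclidean distances and a mean-absolute-difference aggregation; the patent does not fix the exact formula, so both choices are illustrative:

```python
import itertools
import math

def relative_distances(coords):
    """Pairwise Euclidean distances between object coordinates."""
    return [math.dist(a, b) for a, b in itertools.combinations(coords, 2)]

def error_parameter(local_coords, global_coords):
    """Mean absolute difference between corresponding pairwise distances
    in the local map and the target local map. Coordinates must be
    listed in the same (corresponding) object order."""
    d_local = relative_distances(local_coords)
    d_global = relative_distances(global_coords)
    return sum(abs(a - b) for a, b in zip(d_local, d_global)) / len(d_local)
```

A larger result indicates a larger accumulated sensing/mapping error, and hence a larger effective error area in the next step.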
Optionally, in the above method, the determining, based on the error parameter, that the robot is in an effective error area in the target local map includes:
mapping an origin in the local map into the target local map, and determining a mapping point of the origin mapped to the target local map;
and determining an effective error area which takes the mapping point as a circle center and the error parameter as a radius based on the mapping point and the error parameter, wherein the effective error area is an area of all possible positions of the robot in the global map.
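Assuming a grid-based map, the circular effective error area can be enumerated with a hypothetical helper like the following (not part of the patent's claims; the patent only defines the region as a circle centred on the mapping point with the error parameter as radius):

```python
def effective_error_area(mapping_point, error_radius, map_width, map_height):
    """All grid cells within error_radius of the mapped origin.
    Each returned cell is a candidate position of the robot."""
    cx, cy = mapping_point
    cells = []
    for x in range(map_width):
        for y in range(map_height):
            if (x - cx) ** 2 + (y - cy) ** 2 <= error_radius ** 2:
                cells.append((x, y))
    return cells
```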
Optionally, in the method, the determining a radar data set corresponding to each pixel point in the global map includes:
determining a preset search range of the robot, and setting a search angle required for radar search of each pixel point according to the search range;
and determining the number of times of radar search required by each pixel point according to the search range and the search angle, searching the current environment of each pixel point according to the number of times of search, and obtaining radar data of each pixel point in the global map corresponding to each search angle so as to determine a radar data set corresponding to each pixel point in the global map.
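As a sketch of this step, the candidate poses at each pixel point can be generated from a search range and search angle; the 360° range and 30° step below are illustrative values, since the patent leaves both unspecified:

```python
def search_poses(pixel, search_range_deg=360, angle_step_deg=30):
    """Candidate (position, heading) pairs at one pixel point. The
    number of radar searches equals search_range / angle_step, so each
    pixel point yields the same number of radar data entries."""
    n_scans = search_range_deg // angle_step_deg
    return [(pixel, k * angle_step_deg) for k in range(n_scans)]
```

Simulating one radar scan of the global map from each of these poses would then produce the radar data set for that pixel point.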
A robotic relocating device comprising:
the first acquisition unit is used for calling a preset camera and a laser radar to acquire the current environment information of the robot when the robot is started;
the generating unit is used for constructing a local map with the current position of the robot as an origin according to the current environment information of the robot and generating a local information table corresponding to the local map;
the second acquisition unit is used for acquiring a preset global map and a global information table corresponding to the global map, and selecting a target local map corresponding to the local map from the global map based on the corresponding relation between the local information table and the global information table;
the first matching unit is used for matching the local map with the target local map to obtain an error parameter between the local map and the target local map;
a first determination unit, configured to determine, based on the error parameter, that the robot is in an effective error area in the target local map;
the second determining unit is used for selecting a plurality of pixel points in the effective error area and determining a radar data set corresponding to each pixel point in the global map;
the second matching unit is used for determining current radar data in the current environment information of the robot, matching the current radar data with each radar data in each radar data set, and obtaining target radar data with the highest similarity to the current radar data;
and the third determining unit is used for acquiring the target pose of a target position pixel point corresponding to the target radar data, determining that the target pose is the real pose of the robot, and finishing the relocation of the robot.
The above apparatus, optionally, the first acquisition unit comprises:
the acquisition subunit is used for calling a preset camera, acquiring object information corresponding to each identified object in the surrounding environment of the robot, and determining the relative position of each identified object relative to the robot through a preset laser radar;
and the first determining subunit is used for determining the current environment information of the robot based on the object information corresponding to each identified object and the relative position of each identified object relative to the robot.
The above apparatus, optionally, the generating unit includes:
a second determining subunit, configured to determine local coordinates of each of the identified objects in the local map based on a relative position of each of the identified objects with respect to the robot;
and the first generation subunit is used for generating a local information table corresponding to the local map according to the object information corresponding to the identified objects and the local coordinates of each identified object in the local map.
The above apparatus, optionally, the first matching unit includes:
a third determining subunit, configured to determine local objects included in the target local map, where the local objects correspond to recognized objects in the local map one to one;
a second generating subunit, configured to generate, based on the global information table, a target local information table corresponding to the target local map, where the target local information table includes real coordinates of each local object in the global map;
the first calculation subunit is used for calculating the relative distance between the local objects in the target local map according to the real coordinates of each local object in the global map;
the second calculating subunit is used for calculating the relative distance between the identified objects in the local map according to the local coordinates of each identified object in the local map;
and the matching subunit is used for performing a matching calculation on the relative distances between the local objects and the relative distances between the identified objects to obtain an error parameter between the local map and the target local map.
A storage medium comprising stored instructions, wherein the instructions, when executed, control an apparatus in which the storage medium is located to perform the above-described robot relocation method.
An electronic device comprising a memory, and one or more instructions, wherein the one or more instructions are stored in the memory and configured to be executed by one or more processors to perform the robot repositioning method described above.
Compared with the prior art, the invention has the following advantages:
the invention provides a robot repositioning method, which comprises the following steps: when the robot is started, calling a preset camera and a laser radar to acquire the current environment information of the robot; according to the current environment information of the robot, constructing a local map with the current position of the robot as an origin, and generating a local information table corresponding to the local map; acquiring a preset global map and a global information table corresponding to the global map, and selecting a target local map corresponding to the local map from the global map based on the corresponding relation between the local information table and the global information table; matching the local map with the target local map to obtain an error parameter between the local map and the target local map; determining, based on the error parameter, that the robot is in an effective error region in the target local map; selecting a plurality of pixel points in the effective error area, and determining a radar data set corresponding to each pixel point in the global map; determining current radar data in the current environment information of the robot, matching the current radar data with each radar data in each radar data set, and obtaining target radar data with the highest similarity to the current radar data; and acquiring the target pose of a target position pixel point corresponding to the target radar data, determining that the target pose is the real pose of the robot, and finishing the relocation of the robot. By applying the method provided by the invention, the environmental information is acquired through the camera and the radar data so as to construct the local map and the local information table. 
And finally, determining the real pose of the robot through the effective error area, quickly and accurately positioning the current position of the robot, and ensuring that the robot accurately positions and judges the surrounding environment in the working process.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is obvious that the drawings in the following description are only embodiments of the present invention, and for those skilled in the art, other drawings can be obtained from the provided drawings without creative effort.
Fig. 1 is a flowchart of a method of a robot repositioning method according to an embodiment of the present invention;
FIG. 2 is a flowchart of another method of a robot repositioning method according to an embodiment of the present invention;
FIG. 3 is a flowchart of another method of a robot repositioning method according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a robot relocating device according to an embodiment of the invention;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention. It is obvious that the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without creative effort, shall fall within the protection scope of the present invention.
In this application, relational terms such as first and second are used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between such entities or actions. The terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The invention is operational with numerous general purpose or special purpose computing device environments or configurations. For example: personal computers, server computers, hand-held or portable devices, tablet-type devices, multi-processor apparatus, distributed computing environments that include any of the above devices or equipment, and the like.
The embodiment of the present invention provides a method that can be applied to a variety of system platforms. The execution subject of the method can be a processor arranged inside a robot, or another processor connected with a computer terminal of the robot or with various mobile devices. A flowchart of the method is shown in Fig. 1 and specifically includes:
s101: when the robot is started, calling a preset camera and a laser radar to acquire the current environment information of the robot;
in the embodiment of the invention, when the robot is started, its correct pose at the initial moment cannot be known because the robot has global uncertainty. Therefore, a preset camera and a preset laser radar are called to acquire the current environment information of the robot.
It should be noted that the camera may be a monocular camera; the lidar may be a single line lidar.
S102: according to the current environment information of the robot, constructing a local map with the current position of the robot as an origin, and generating a local information table corresponding to the local map;
in the embodiment of the invention, because the specific position of the robot is not determined, a local map with the current position of the robot as the origin can be constructed according to the current environment information of the robot, and a local information table corresponding to the local map is generated according to the environment information. Wherein the local map may be a local semantic map; the local information table may be a local semantic information table.
S103: acquiring a preset global map and a global information table corresponding to the global map, and selecting a target local map corresponding to the local map from the global map based on the corresponding relation between the local information table and the global information table;
in the embodiment of the invention, the robot internally stores the global map in advance, and after the local map is constructed and the local information table is generated, the preset global map and the preset global information table are obtained. The global information table records object information of each object in the global map and a specific position in the global map. And determining a target local map corresponding to the local map in the global map according to the corresponding relation between the local information table and the global information table.
The local map belongs to a part of the area in the global map, and the specific position of the local map in the global map cannot be known due to uncertainty of the robot at the initial time. And selecting a target local map corresponding to the local map in the global map according to the corresponding relation between the local information table and the global information table.
Further, the local information table and the global information table have a corresponding relationship, and the object information contained in the local information table can be searched in the global information table to obtain corresponding object information.
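A minimal illustration of this correspondence lookup, assuming (hypothetically) that each information table maps an object name to its recorded coordinates; the patent does not fix a concrete table format:

```python
def find_candidates(local_table, global_table):
    """For each identified object in the local information table, look
    up the entries with the same object name in the global information
    table, yielding the candidate global positions of that object."""
    matches = {}
    for name in local_table:
        matches[name] = global_table.get(name, [])
    return matches
```

A contiguous region of the global map that covers one candidate position per local object would then be a candidate target local map.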
S104: matching the local map with the target local map to obtain an error parameter between the local map and the target local map;
in the embodiment of the invention, after the target local map corresponding to the local map is selected in the global map, the local map is matched with the target local map to obtain the matched error parameter.
It should be noted that, because there is a certain device error between the camera and the laser radar, and there is a certain error when acquiring the environment information, there is also an error in the process of constructing the local map, and after matching the local map with the target local map, the error parameter after matching is determined.
S105: determining, based on the error parameter, that the robot is in an effective error region in the target local map;
in the embodiment of the invention, because an error exists between the local map and the target local map, the robot has an error when the position on the local map is mapped to the position in the target local map. And determining an effective error area of the robot in the target local map according to the error parameters.
It should be noted that, due to the existence of errors, any position in the effective error area may be the true effective position of the robot.
S106: selecting a plurality of pixel points in the effective error area, and determining a radar data set corresponding to each pixel point in the global map;
in the embodiment of the invention, a plurality of pixel points are selected in the effective error area, wherein each pixel point is possibly the real position of the robot. Therefore, the corresponding radar data set of each pixel point in the global map can be determined.
It should be noted that each pixel point may have a plurality of radar data, and the plurality of radar data of each pixel point constitute a radar data set. Each pixel point has the same number of radar data.
S107: determining current radar data in the current environment information of the robot, matching the current radar data with each radar data in each radar data set, and obtaining target radar data with the highest similarity to the current radar data;
in the embodiment of the invention, the current radar data of the robot is determined according to the originally acquired environment information. The current radar data of the robot is matched one by one against the plurality of radar data corresponding to each pixel point, and the target radar data with the highest matching similarity is obtained from all the radar data.
It should be noted that the current radar data of the robot is the only radar data corresponding to the current pose of the robot.
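The matching step can be sketched as follows; the squared-difference similarity metric is an assumption, since the patent only requires selecting the candidate with the highest similarity:

```python
def best_match(current_scan, candidate_scans):
    """Pick the candidate radar scan most similar to the live scan.
    Scans are equal-length sequences of range readings; the candidate
    minimising the sum of squared range differences wins."""
    def sq_err(scan):
        return sum((a - b) ** 2 for a, b in zip(current_scan, scan))
    return min(candidate_scans, key=sq_err)
```

The pose (pixel point and search angle) that produced the winning candidate scan is then taken as the target pose.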
S108: and acquiring the target pose of a target position pixel point corresponding to the target radar data, determining that the target pose is the real pose of the robot, and finishing the relocation of the robot.
In the embodiment of the invention, after the target radar data with the highest similarity after matching is obtained, the target pixel points corresponding to the radar data and the target pose corresponding to the target radar data in the target pixel points are obtained. And finally, determining the target pose as the real pose of the robot, and finishing the relocation of the robot.
In the robot repositioning method provided by the embodiment of the invention, when the robot is started, the environment information of the robot is obtained through the camera and the laser radar, a local map with the current position of the robot as the origin is constructed through the environment information, and a local information table is generated. And determining the corresponding relation between the local information table and a preset global information table, and determining a target local map corresponding to the local map in a preset global map. And matching the local map with the target local map, and determining the error parameters of the local map and the target local map at the same time. And determining an effective error area of the robot in the target local map according to the error parameter. And selecting a plurality of pixel points in the effective error area, and acquiring a radar data set corresponding to each pixel point. And matching the current radar data of the robot with each radar data in each radar data set, and determining the target radar data with the highest similarity after matching. And determining the target pose corresponding to the target radar data as the real pose of the robot, and finishing the relocation of the robot.
It should be noted that the local map adopts the same construction scale as the global map, so as to ensure that the scale does not change during matching, and reduce errors during matching.
By applying the method provided by the embodiment of the invention, the environmental information is obtained through the camera and the radar data so as to construct the local map and the local information table. And finally, determining the real pose of the robot through the effective error area, quickly and accurately positioning the current position of the robot, and ensuring that the robot accurately positions and judges the surrounding environment in the working process.
In the method provided by the embodiment of the present invention, based on the step S101, the method for acquiring the current environmental information of the robot by calling a preset camera and a laser radar specifically includes:
calling a preset camera to obtain object information corresponding to each identified object in the surrounding environment of the robot, and determining the relative position of each identified object relative to the robot through a preset laser radar;
and determining the current environment information of the robot based on the object information corresponding to each identified object and the relative position of each identified object relative to the robot.
In the method provided by the embodiment of the present invention, based on the content of the above embodiment and step S102, the generating a local information table corresponding to the local map specifically includes:
determining local coordinates of each of the identified objects in the local map based on the relative position of the respective identified object with respect to the robot;
and generating a local information table corresponding to the local map according to the object information corresponding to the identified object and the local coordinate of each identified object in the local map.
In the method provided based on the above embodiment, the process of constructing the local map and generating the local information table is shown in fig. 2, and includes:
s201: acquiring object information corresponding to each identified object in the surrounding environment of the robot by using a camera;
in the embodiment of the invention, the specific information of each recognized object around the robot, such as its shape, color, and name, is obtained through the camera.
S202: determining the relative position of each identified object relative to the robot by using the laser radar;
in the embodiment of the present invention, the position of each recognized object with respect to the robot, i.e., the distance and angle between the robot and the object, is determined by the laser radar.
S203: according to the object information corresponding to each identified object and the relative position of each identified object relative to the robot, constructing a local map with the robot as an origin, and determining local coordinates of each identified object in the local map;
in the embodiment of the invention, a local map with the same measurement scale as the global map is constructed, and the coordinates of each identified object in the local map are determined.
S204: and generating a local information table corresponding to the local map according to the object information of each identified object and the local coordinates in the local map.
In the embodiment of the invention, a local information table with object information and local coordinates corresponding to each recognized object is generated.
In the robot repositioning method provided by the embodiment of the invention, the camera is used to identify the objects in the surrounding environment of the robot, and the object information corresponding to each identified object is determined. The object information includes the name, shape and color of the object, and the like. After the object information of each identified object in the current environment of the robot is determined, the relative position of each identified object relative to the robot is determined through a preset laser radar. The current environment information of the robot is determined according to the object information corresponding to each identified object and the relative position of each identified object relative to the robot. When the local map is constructed, it is constructed according to the environment information, and the local coordinates of each identified object are determined in the local map. A local information table is generated from the object information corresponding to each identified object and its local coordinates in the local map.
The process of constructing the local map comprises the following steps: firstly, a grayscale image is initialized with a pixel value of 127 representing an unknown area; the point where the robot is located is the origin O(0, 0). First, the object information of each recognized object in the unknown area is obtained through the camera; then a point P(x0, y0) of a recognized object is obtained through the laser radar, the pixel values of the area along the line segment (0, 0)-(x0, y0) are assigned 255 to represent an open area, and the pixel value of the point P(x0, y0) is assigned 0 to represent the recognized object. The same operation is carried out for all the identified objects obtained by scanning the radar data, yielding a local map centered on the robot. A recognition module in the camera is used to recognize the objects in the environment; the recognition module is trained with a deep learning algorithm to recognize objects. When an object is accurately identified, the coordinate position in the laser radar measurement data at the corresponding angle is selected according to the angle of the center of the identified object, taken as the local coordinate of the identified object on the map, and recorded in the local information table.
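The grayscale-map construction described above (127 = unknown area, 255 = open area, 0 = identified object) can be sketched as follows. The grid size, resolution and line-walking scheme are assumptions for illustration, not the patent's implementation:

```python
import numpy as np

def build_local_map(points, size=201, resolution=0.05):
    """Rasterize lidar object returns into a grayscale local map centered
    on the robot: 127 = unknown, 255 = open area, 0 = identified object.
    `points` are (x, y) object positions in metres; the robot is at O(0, 0),
    mapped to the centre pixel. Parameters are illustrative assumptions."""
    grid = np.full((size, size), 127, dtype=np.uint8)  # all unknown initially
    c = size // 2                                      # robot pixel, origin O
    for x0, y0 in points:
        # Walk the segment (0,0)-(x0,y0) and mark it as open area (255).
        n = max(int(max(abs(x0), abs(y0)) / resolution), 1)
        for t in np.linspace(0.0, 1.0, n, endpoint=False):
            px = c + int(round(t * x0 / resolution))
            py = c + int(round(t * y0 / resolution))
            grid[py, px] = 255
        # Mark the endpoint P(x0, y0) itself as an identified object (0).
        grid[c + int(round(y0 / resolution)), c + int(round(x0 / resolution))] = 0
    return grid
```

Repeating this for every identified object in the radar scan yields the robot-centered local map described in the text.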
By applying the method provided by the embodiment of the invention, the environment information of the robot is obtained through the camera and the laser radar, and the local information table is generated according to the environment information, so that the current position of the robot can be further determined according to the local information, the current position of the robot can be rapidly and accurately positioned, and the robot can be ensured to accurately position and judge the surrounding environment in the working process.
In the method provided in the embodiment of the present invention, based on the content of the above embodiment and step S104, the matching is performed on the local map and the target local map to obtain an error parameter between the local map and the target local map, as shown in fig. 3, the method specifically includes:
S301: determining each local object contained in the target local map, wherein each local object corresponds to each identified object in the local map one to one;
In the embodiment of the present invention, since the local information table and the global information table have a corresponding relationship, after a target local map corresponding to the local map in the global map is determined, each local object included in the target local map is determined according to the global information table. The local objects in the target local map correspond one to one to the identified objects in the local map. For example, if the identified objects in the local map include a white table and a white chair, the target local map also includes a white table and a white chair.
S302: generating a target local information table corresponding to the target local map based on the global information table, wherein the target local information table comprises real coordinates of each local object in the global map;
In the embodiment of the invention, the target local information table corresponding to the target local map is generated based on the global information table. Each local object contained in the target local information table has real coordinates in the global map.
S303: calculating the relative distance between each local object in the target local map according to the real coordinate of each local object in the global map;
In the embodiment of the invention, the relative distance between each pair of local objects is calculated according to the real coordinates of each local object in the target local map. For example, if the local objects include objects A, B and C, the distance from A to B and then from B to C is calculated: A-B-C.
S304: calculating the relative distance between the recognized objects in the local map according to the local coordinates of each recognized object in the local map;
In the embodiment of the invention, the relative distance between the recognized objects is calculated according to the local coordinates of the recognized objects in the local map. For example, if the recognized objects include objects A1, B1 and C1, the distance from A1 to B1 and then from B1 to C1 is calculated: A1-B1-C1.
S305: and matching and calculating the relative distance between the local objects and the relative distance between the recognized objects to obtain an error parameter between the local map and the target local map.
In the embodiment of the invention, the relative distance between each pair of local objects and the relative distance between each pair of identified objects are matched and calculated, and the error parameter between the local map and the target local map is determined.
In the robot repositioning method provided by the embodiment of the invention, the local objects contained in a target local map are determined, and each local object in the target local map corresponds one to one to an identified object in the local map. A target local information table is generated from the global information table. The relative distance between the local objects is calculated according to the real coordinates of each local object in the target local information table; the relative distance between the recognized objects is calculated from the local coordinates of the recognized objects in the local information table. The relative distances between the local objects and the relative distances between the identified objects are matched to obtain an error parameter between the local map and the target local map. For example: the target local map comprises a first local object A and a second local object B, and the local map comprises a first recognized object A1 and a second recognized object B1; let the target local information table be T_global and the local information table be T_local. For the two different objects A and B in T_global, the actual distance D_AB_global of A and B is calculated; for the two corresponding identified objects A1 and B1 in T_local, the actual distance D_A1B1_local of A1 and B1 is calculated; and D_A1B1_local is matched against D_AB_global to obtain an error parameter e.
When there are more than two recognized objects in the local map, just two different recognized objects may be selected and a first distance between them calculated; the two local objects in the target local map that correspond to the selected recognized objects are then selected, and a second distance between these two local objects is calculated. The difference between the first distance and the second distance gives the error parameter corresponding to that pair of distances.
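The distance-matching computation of the error parameter can be sketched as below. The function name and tuple layout are hypothetical; the patent only specifies that the two relative distances are compared:

```python
from math import hypot

def error_parameter(global_pair, local_pair):
    """Match the relative distance between two local objects in the target
    local map (real coordinates in the global map) against the relative
    distance between the two corresponding identified objects in the local
    map (local coordinates). The absolute difference is taken here as the
    error parameter e — an illustrative choice of matching calculation."""
    (ax, ay), (bx, by) = global_pair        # local objects from the target local map
    (a1x, a1y), (b1x, b1y) = local_pair     # identified objects from the local map
    d_global = hypot(bx - ax, by - ay)      # relative distance in real coordinates
    d_local = hypot(b1x - a1x, b1y - a1y)   # relative distance in local coordinates
    return abs(d_global - d_local)
```

When the two maps agree perfectly, e is zero; any residual mismatch between the two coordinate frames shows up as a positive error parameter.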
Alternatively, based on the step S103, in the process of selecting the target local map corresponding to the local map in the global map according to the corresponding relationship between the local information table and the global information table, the target local map corresponding to the local map may be selected in the global map by calculating the distances between the objects. For example: the intersection of the object sets in the global information table and the local information table is calculated to obtain a new global information table and a new local information table which contain only the intersection objects; the global information table contains object information of the same objects as the local information table, and also contains object information of other objects. Then, for any two different classes of objects A and B in the new global information table, the actual distance D_AB between A and B is calculated, the two corresponding classes of objects A' and B' in the local information table are found, and the actual distance D_A'B' between A' and B' is calculated; if the absolute value of the difference between D_A'B' and D_AB is smaller than a preset threshold value, it is judged that AB and A'B' are likely to be the same pair of objects in the environment. The transformation relation between the local map and the global map can then be obtained from the coordinate transformation relation between AB and A'B', and the target local map corresponding to the local map in the global map can be calculated directly from this transformation relation.
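One possible sketch of this distance-based pair matching is shown below, assuming the intersected tables map an object class name to its coordinates. All names and the threshold value are illustrative assumptions, not the patent's implementation:

```python
from itertools import combinations
from math import hypot

def candidate_pairs(global_table, local_table, threshold=0.2):
    """For every two different-class objects A, B present in both the
    (intersected) global table and local table, compute D_AB from the
    global coordinates and D_A'B' from the local coordinates, and keep
    the pair when |D_A'B' - D_AB| is below the threshold, i.e. when AB
    and A'B' are likely the same pair of objects in the environment.
    Tables map class name -> (x, y); names here are assumptions."""
    matches = []
    for a, b in combinations(sorted(global_table), 2):
        if a not in local_table or b not in local_table:
            continue  # keep only classes in the intersection of both tables
        d_ab = hypot(global_table[b][0] - global_table[a][0],
                     global_table[b][1] - global_table[a][1])
        d_apbp = hypot(local_table[b][0] - local_table[a][0],
                       local_table[b][1] - local_table[a][1])
        if abs(d_apbp - d_ab) < threshold:
            matches.append((a, b))
    return matches
```

Each surviving pair supplies a correspondence from which the local-to-global coordinate transformation, and hence the target local map, can be derived.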
By applying the method provided by the embodiment of the invention, the error parameter between the local map and the target local map is obtained through matching calculation, so that the robot can be positioned more accurately according to the error parameter, and it can be ensured that the robot accurately positions itself and judges the surrounding environment during its work.
In the method provided by the embodiment of the present invention, based on the step S106, the determining, based on the error parameter, that the robot is located in the effective error area in the global map specifically includes:
mapping an origin in the local map into the target local map, and determining a mapping point of the origin mapped to the target local map;
and determining an effective error area which takes the mapping point as a circle center and the error parameter as a radius based on the mapping point and the error parameter, wherein the effective error area is an area of all possible positions of the robot in the global map.
In the robot repositioning method provided by the embodiment of the invention, after the target local map corresponding to the local map in the global map is determined, the origin in the local map is mapped into the target local map according to the corresponding relation between the local map and the target local map, and the mapping point of the origin after being mapped into the target local map is determined. Since the target local map belongs to the global map, the actual coordinates of the mapping point can be determined in the global map. However, because an error exists between the local map and the target local map, an effective error area which takes the mapping point as the center of a circle and the error parameter as the radius is determined according to the error parameter between the local map and the global map and the actual coordinates of the mapping point. The effective error area is the area of all possible positions of the robot in the global map; that is, because an error exists between the target local map and the local map, a certain error also exists when the position of the robot in the local map is mapped into the target local map.
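A minimal sketch of enumerating candidate pixel points inside the effective error area — a circle centred on the mapping point with the error parameter as its radius — might look as follows; the resolution parameter and function name are assumptions:

```python
def pixels_in_error_area(mapped_point, error_radius, resolution=0.05):
    """Enumerate the pixels inside the effective error area: a circle
    centred on the origin's mapping point in the target local map, with
    the error parameter as its radius. These pixels cover all possible
    positions of the robot in the global map. `resolution` (metres per
    pixel) is an illustrative assumption."""
    cx, cy = mapped_point
    r_px = int(error_radius / resolution)           # radius in pixels
    px_cx, px_cy = int(cx / resolution), int(cy / resolution)
    pixels = []
    for px in range(px_cx - r_px, px_cx + r_px + 1):
        for py in range(px_cy - r_px, px_cy + r_px + 1):
            if (px - px_cx) ** 2 + (py - px_cy) ** 2 <= r_px ** 2:
                pixels.append((px, py))             # inside the circle
    return pixels
```

The pixel points selected in step S107 for radar-data simulation would be drawn from this set.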
By applying the method provided by the embodiment of the invention, the effective error area of the robot in the global map is determined through the mapping points and the error parameters, and the processes of the steps S106 to S108 are executed according to the effective error area to accurately position the robot.
In the method provided in the embodiment of the present invention, based on step S107, the determining a radar data set corresponding to each pixel point in the global map includes:
determining a preset search range of the robot, and setting a search angle required for radar search of each pixel point according to the search range;
and determining the number of times of radar search required by each pixel point according to the search range and the search angle, searching the current environment of each pixel point according to the number of times of search, and obtaining radar data of each pixel point in the global map corresponding to each search angle so as to determine a radar data set corresponding to each pixel point in the global map.
In the robot repositioning method provided by the embodiment of the invention, because the pose of the robot in the global map is uncertain, the search angle at which each pixel point needs radar search can be determined through the preset search range. The search range may be 180°, 270°, 360° or the like; the search angle may be determined according to the search range, for example 0.1° or 1°; and each pixel point is used to simulate a possible real position of the robot in the global map. The number of radar searches required for each pixel point is determined according to the search range and the search angle. For example, with a search range of 360° and a search angle of 0.1°, one pixel point needs to perform 3600 radar searches, the heading being rotated by the 0.1° search angle for each search. The current environment of each pixel point is searched according to the number of searches, and the radar data obtained after radar search is performed on each pixel point in the global map at each search angle is obtained, so as to determine the radar data set corresponding to each pixel point in the global map.
It should be noted that each radar data set corresponds to one pixel point, and each radar data set includes a plurality of radar data. For example, if the search range is 360° and the search angle is 0.1°, one pixel point needs to perform 3600 radar searches, with the heading rotated by the 0.1° search angle for each search, so 3600 radar data can be obtained. Each radar search is carried out by the laser radar.
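The relationship between search range, search angle and the number of radar searches per pixel point can be sketched as below; the function name is an illustrative assumption:

```python
def search_schedule(search_range_deg=360.0, search_angle_deg=0.1):
    """Return the number of radar searches one pixel point requires and
    the heading used for each search: the search range divided by the
    angular step, rotating by the search angle each time (e.g.
    360 deg / 0.1 deg -> 3600 searches)."""
    n = int(round(search_range_deg / search_angle_deg))
    headings = [i * search_angle_deg for i in range(n)]
    return n, headings
```

Simulating one radar scan per heading at each candidate pixel point yields that pixel's radar data set.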
By applying the method provided by the embodiment of the invention, the environment in which each pixel point is located is accurately searched according to the preset search range and search angle, so that radar data is more accurate, and the robot is more accurately positioned.
The specific implementation procedures and derivatives thereof of the above embodiments are within the scope of the present invention.
Corresponding to the method described in fig. 1, an embodiment of the present invention further provides a robot repositioning device for implementing the method in fig. 1. The robot repositioning device provided in the embodiment of the present invention may be applied to a computer terminal or various mobile devices; a schematic structural diagram of the device is shown in fig. 4, and it specifically includes:
a first obtaining unit 401, configured to, when the robot is started, call a preset camera and a laser radar to obtain environment information where the robot is currently located;
a generating unit 402, configured to construct a local map with a current location of the robot as an origin according to the current environmental information of the robot, and generate a local information table corresponding to the local map;
a second obtaining unit 403, configured to obtain a preset global map and a global information table corresponding to the global map, and select a target local map corresponding to the local map in the global map based on a correspondence between the local information table and the global information table;
a first matching unit 404, configured to match the local map with the target local map, so as to obtain an error parameter between the local map and the target local map;
a first determining unit 405, configured to determine, based on the error parameter, that the robot is in an effective error area in the target local map;
a second determining unit 406, configured to select a plurality of pixel points in the effective error region, and determine a radar data set corresponding to each pixel point in the global map;
a second matching unit 407, configured to determine current radar data in the current environment information where the robot is located, match the current radar data with each radar data in each radar data set, and obtain target radar data with the highest similarity to the current radar data;
a third determining unit 408, configured to obtain a target pose of a target position pixel point corresponding to the target radar data, determine that the target pose is a real pose of the robot, and complete repositioning of the robot.
In the apparatus provided in the embodiment of the present invention, the first obtaining unit 401 includes:
the acquisition subunit is used for calling a preset camera, acquiring object information corresponding to each identified object in the surrounding environment of the robot, and determining the relative position of each identified object relative to the robot through a preset laser radar;
and the first determining subunit is used for determining the current environment information of the robot based on the object information corresponding to each identified object and the relative position of each identified object relative to the robot.
In the apparatus provided in the embodiment of the present invention, the generating unit 402 includes:
a second determining subunit, configured to determine local coordinates of each of the identified objects in the local map based on a relative position of each of the identified objects with respect to the robot;
and the first generation subunit is used for generating a local information table corresponding to the local map according to the object information corresponding to the identified objects and the local coordinates of each identified object in the local map.
In the apparatus provided in the embodiment of the present invention, the first matching unit 404 includes:
a third determining subunit, configured to determine local objects included in the target local map, where the local objects correspond to recognized objects in the local map one to one;
a second generating subunit, configured to generate, based on the global information table, a target local information table corresponding to the target local map, where the target local information table includes real coordinates of each local object in the global map;
the first calculation subunit is used for calculating the relative distance between the local objects in the target local map according to the real coordinates of each local object in the global map;
the second calculating subunit is used for calculating the relative distance between the identified objects in the local map according to the local coordinates of each identified object in the local map;
and the matching subunit is used for performing matching calculation on the relative distance between the local objects and the relative distance between the identified objects to obtain an error parameter between the local map and the target local map.
In the apparatus provided in the embodiment of the present invention, the first determining unit 405 includes:
a fourth determining subunit, configured to map an origin in the local map into the target local map, and determine a mapping point at which the origin is mapped to the target local map;
a fifth determining subunit, configured to determine, based on the mapping point and the error parameter, an effective error area with the mapping point as a center of a circle and the error parameter as a radius, where the effective error area is an area where the robot is located at all possible positions in the global map.
In the apparatus provided in the embodiment of the present invention, the second determining unit 406 includes:
a sixth determining subunit, configured to determine a search range preset by the robot, and set a search angle at which each pixel needs to be searched by a radar according to the search range;
and the searching subunit is used for determining the number of times of radar searching required to be performed on each pixel point according to the searching range and the searching angle, searching the current environment of each pixel point according to the number of times of searching, and obtaining radar data, corresponding to each searching angle, of each pixel point in the global map so as to determine a radar data set, corresponding to each pixel point, in the global map.
For specific working processes of the first obtaining unit 401, the generating unit 402, the second obtaining unit 403, the first matching unit 404, the first determining unit 405, the second determining unit 406, the second matching unit 407, and the third determining unit 408 in the robot repositioning device disclosed in the above embodiment of the present invention, reference may be made to corresponding contents in the robot repositioning method disclosed in the above embodiment of the present invention, and details are not described here again.
The embodiment of the invention also provides a storage medium comprising stored instructions, wherein when the instructions are executed, the device where the storage medium is located is controlled to execute the robot repositioning method.
An electronic device is provided in an embodiment of the present invention, and the structural diagram of the electronic device is shown in fig. 5, which specifically includes a memory 501 and one or more instructions 502, where the one or more instructions 502 are stored in the memory 501, and are configured to be executed by one or more processors 503 to perform the following operations according to the one or more instructions 502:
when the robot is started, calling a preset camera and a laser radar to acquire the current environment information of the robot;
according to the current environment information of the robot, constructing a local map with the current position of the robot as an origin, and generating a local information table corresponding to the local map;
acquiring a preset global map and a global information table corresponding to the global map, and selecting a target local map corresponding to the local map from the global map based on the corresponding relation between the local information table and the global information table;
matching the local map with the target local map to obtain an error parameter between the local map and the target local map;
determining, based on the error parameter, that the robot is in an effective error region in the target local map;
selecting a plurality of pixel points in the effective error area, and determining a radar data set corresponding to each pixel point in the global map;
determining current radar data in the current environment information of the robot, matching the current radar data with each radar data in each radar data set, and obtaining target radar data with the highest similarity to the current radar data;
and acquiring the target pose of a target position pixel point corresponding to the target radar data, determining that the target pose is the real pose of the robot, and finishing the relocation of the robot.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, the system or system embodiments are substantially similar to the method embodiments and therefore are described in a relatively simple manner, and reference may be made to some of the descriptions of the method embodiments for related points. The above-described system and system embodiments are only illustrative, wherein the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A robot repositioning method, applied to a robot, comprising:
when the robot is started, calling a preset camera and a laser radar to acquire the current environment information of the robot;
according to the current environment information of the robot, constructing a local map with the current position of the robot as an origin, and generating a local information table corresponding to the local map;
acquiring a preset global map and a global information table corresponding to the global map, and selecting a target local map corresponding to the local map from the global map based on the corresponding relation between the local information table and the global information table;
matching the local map with the target local map to obtain an error parameter between the local map and the target local map;
determining, based on the error parameter, that the robot is in an effective error region in the target local map;
selecting a plurality of pixel points in the effective error area, and determining a radar data set corresponding to each pixel point in the global map;
determining current radar data in the current environment information of the robot, matching the current radar data with each radar data in each radar data set, and obtaining target radar data with the highest similarity to the current radar data;
and acquiring the target pose of a target position pixel point corresponding to the target radar data, determining that the target pose is the real pose of the robot, and finishing the relocation of the robot.
2. The method of claim 1, wherein the invoking of the preset camera and the laser radar to obtain the current environment information of the robot comprises:
calling a preset camera to obtain object information corresponding to each identified object in the surrounding environment of the robot, and determining the relative position of each identified object relative to the robot through a preset laser radar;
and determining the current environment information of the robot based on the object information corresponding to each identified object and the relative position of each identified object relative to the robot.
3. The method according to claim 2, wherein the generating the local information table corresponding to the local map comprises:
determining local coordinates of each of the identified objects in the local map based on the relative position of the respective identified object with respect to the robot;
and generating a local information table corresponding to the local map according to the object information corresponding to the identified object and the local coordinate of each identified object in the local map.
4. The method of claim 3, wherein matching the local map with the target local map to obtain an error parameter between the local map and the target local map comprises:
determining each local object contained in the target local map, wherein each local object corresponds to each identified object in the local map one to one;
generating a target local information table corresponding to the target local map based on the global information table, wherein the target local information table comprises real coordinates of each local object in the global map;
calculating the relative distance between each local object in the target local map according to the real coordinate of each local object in the global map;
calculating the relative distance between the recognized objects in the local map according to the local coordinates of each recognized object in the local map;
and matching and calculating the relative distance between the local objects and the relative distance between the recognized objects to obtain an error parameter between the local map and the target local map.
5. The method of claim 1, wherein the determining that the robot is in an effective error zone in the global map based on the error parameter comprises:
mapping an origin in the local map into the target local map, and determining a mapping point of the origin mapped to the target local map;
and determining an effective error area which takes the mapping point as a circle center and the error parameter as a radius based on the mapping point and the error parameter, wherein the effective error area is an area of all possible positions of the robot in the global map.
6. The method of claim 1, wherein the determining the corresponding radar data set of each pixel point in the global map comprises:
determining a preset search range of the robot, and setting a search angle required for radar search of each pixel point according to the search range;
and determining the number of times of radar search required by each pixel point according to the search range and the search angle, searching the current environment of each pixel point according to the number of times of search, and obtaining radar data of each pixel point in the global map corresponding to each search angle so as to determine a radar data set corresponding to each pixel point in the global map.
7. A robot repositioning device, comprising:
the first acquisition unit is used for calling a preset camera and a laser radar to acquire the current environment information of the robot when the robot is started;
the generating unit is used for constructing a local map with the current position of the robot as an origin according to the current environment information of the robot and generating a local information table corresponding to the local map;
the second acquisition unit is used for acquiring a preset global map and a global information table corresponding to the global map, and selecting a target local map corresponding to the local map from the global map based on the corresponding relation between the local information table and the global information table;
the first matching unit is used for matching the local map with the target local map to obtain an error parameter between the local map and the target local map;
a first determination unit, configured to determine, based on the error parameter, that the robot is in an effective error area in the target local map;
the second determining unit is used for selecting a plurality of pixel points in the effective error area and determining a radar data set corresponding to each pixel point in the global map;
the second matching unit is used for determining current radar data in the current environment information of the robot, matching the current radar data with each radar data in each radar data set, and obtaining target radar data with the highest similarity to the current radar data;
and the third determining unit is used for acquiring the target pose of a target position pixel point corresponding to the target radar data, and determining that the target pose is the real pose of the robot, so as to complete the repositioning of the robot.
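The matching performed by the second matching unit can be sketched as a nearest-scan search over the candidate radar data sets. The similarity measure here (negated mean absolute range difference) is an assumption for illustration; the patent does not fix a metric, and the function name `best_pose` is hypothetical.

```python
def best_pose(current_scan, candidates):
    """candidates: list of (pose, simulated_scan) pairs gathered over all
    pixel points in the effective error area.  Return the pose whose
    simulated scan is most similar to the robot's live scan."""
    def similarity(a, b):
        # Higher is more similar: negated mean absolute range difference.
        return -sum(abs(x - y) for x, y in zip(a, b)) / len(a)
    return max(candidates, key=lambda c: similarity(current_scan, c[1]))[0]
```

The returned pose plays the role of the "target pose" of the third determining unit: the candidate whose simulated scan best explains the current radar observation.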
8. The apparatus of claim 7, wherein the first acquisition unit comprises:
the acquisition subunit is used for calling a preset camera, acquiring object information corresponding to each identified object in the surrounding environment of the robot, and determining the relative position of each identified object relative to the robot through a preset laser radar;
and the first determining subunit is used for determining the current environment information of the robot based on the object information corresponding to each identified object and the relative position of each identified object relative to the robot.
9. The apparatus of claim 8, wherein the generating unit comprises:
a second determining subunit, configured to determine local coordinates of each of the identified objects in the local map based on a relative position of each of the identified objects with respect to the robot;
and the first generation subunit is used for generating a local information table corresponding to the local map according to the object information corresponding to the identified objects and the local coordinates of each identified object in the local map.
10. The apparatus of claim 7, wherein the first matching unit comprises:
a third determining subunit, configured to determine local objects included in the target local map, where the local objects correspond to recognized objects in the local map one to one;
a second generating subunit, configured to generate, based on the global information table, a target local information table corresponding to the target local map, where the target local information table includes real coordinates of each local object in the global map;
the first calculation subunit is used for calculating the relative distance between the local objects in the target local map according to the real coordinates of each local object in the global map;
the second calculating subunit is used for calculating the relative distance between the identified objects in the local map according to the local coordinates of each identified object in the local map;
and the matching subunit is used for performing matching calculation on the relative distance between the local objects and the relative distance between the identified objects to obtain an error parameter between the local map and the target local map.
CN201910561961.2A 2019-06-26 2019-06-26 Robot repositioning method and device Active CN110207710B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910561961.2A CN110207710B (en) 2019-06-26 2019-06-26 Robot repositioning method and device


Publications (2)

Publication Number Publication Date
CN110207710A CN110207710A (en) 2019-09-06
CN110207710B true CN110207710B (en) 2021-03-16

Family

ID=67794736

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910561961.2A Active CN110207710B (en) 2019-06-26 2019-06-26 Robot repositioning method and device

Country Status (1)

Country Link
CN (1) CN110207710B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110716209B (en) * 2019-09-19 2021-12-14 浙江大华技术股份有限公司 Map construction method, map construction equipment and storage device
CN111460065B (en) * 2020-03-26 2023-12-01 杭州海康威视数字技术股份有限公司 Positioning method and device of radar in map
CN111678516B (en) * 2020-05-08 2021-11-23 中山大学 Bounded region rapid global positioning method based on laser radar
CN111693053B (en) * 2020-07-09 2022-05-06 上海大学 Repositioning method and system based on mobile robot
CN112231427A (en) * 2020-10-13 2021-01-15 上海美迪索科电子科技有限公司 Map construction method, device, medium and equipment
CN112269386B (en) * 2020-10-28 2024-04-02 深圳拓邦股份有限公司 Symmetrical environment repositioning method, symmetrical environment repositioning device and robot
CN112509027B (en) * 2020-11-11 2023-11-21 深圳市优必选科技股份有限公司 Repositioning method, robot, and computer-readable storage medium
CN114088099A (en) * 2021-11-18 2022-02-25 北京易航远智科技有限公司 Semantic relocation method and device based on known map, electronic equipment and medium
CN114383622B (en) * 2021-12-27 2024-04-19 广州视源电子科技股份有限公司 Robot positioning method, robot, and computer-readable storage medium
CN114674307B (en) * 2022-05-26 2022-09-27 苏州魔视智能科技有限公司 Repositioning method and electronic equipment

Citations (5)

Publication number Priority date Publication date Assignee Title
CN103941264A (en) * 2014-03-26 2014-07-23 南京航空航天大学 Positioning method using laser radar in indoor unknown environment
JP2017097402A (en) * 2015-11-18 2017-06-01 株式会社明電舎 Surrounding map preparation method, self-location estimation method and self-location estimation device
CN108981701A (en) * 2018-06-14 2018-12-11 广东易凌科技股份有限公司 A kind of indoor positioning and air navigation aid based on laser SLAM
CN109141437A (en) * 2018-09-30 2019-01-04 中国科学院合肥物质科学研究院 A kind of robot global method for relocating
CN109144056A (en) * 2018-08-02 2019-01-04 上海思岚科技有限公司 The global method for self-locating and equipment of mobile robot

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US8594923B2 (en) * 2011-06-14 2013-11-26 Crown Equipment Limited Method and apparatus for sharing map data associated with automated industrial vehicles


Non-Patent Citations (4)

Title
NAVIS — An UGV Indoor Positioning System Using Laser Scan Matching for Large-Area Real-Time Applications; Tang Jian, et al.; Sensors; 2014-01-31; Vol. 14, No. 7; pp. 11805-11824 *
A real-time indoor robot relocalization method based on point-cloud maps; Ma Yuelong, et al.; Journal of System Simulation; 2017-12-31; Vol. 29; pp. 15-29 *
Research progress of LiDAR SLAM technology and its application in unmanned vehicles; Li Chenxi, et al.; Journal of Beijing Union University; 2017-10-31; Vol. 31, No. 4; pp. 61-69 *
Research on simultaneous localization and mapping methods for mobile robots; Chen Qunying; Automation & Instrumentation; 2018-05-31; No. 223; pp. 31-34 *

Also Published As

Publication number Publication date
CN110207710A (en) 2019-09-06

Similar Documents

Publication Publication Date Title
CN110207710B (en) Robot repositioning method and device
CN108875524B (en) Sight estimation method, device, system and storage medium
CN109118542B (en) Calibration method, device, equipment and storage medium between laser radar and camera
WO2019040997A9 (en) Method and system for use in performing localisation
CN111094895B (en) System and method for robust self-repositioning in pre-constructed visual maps
US9613328B2 (en) Workflow monitoring and analysis system and method thereof
CN111239763A (en) Object positioning method and device, storage medium and processor
JP2014106597A (en) Autonomous moving body, object information acquisition device, and object information acquisition method
Almeida et al. Detection of data matrix encoded landmarks in unstructured environments using deep learning
Kostoeva et al. Indoor 3D interactive asset detection using a smartphone
CN110728172A (en) Point cloud-based face key point detection method, device and system and storage medium
CN111652057A (en) Map construction method and device, computer equipment and storage medium
CN108805121B (en) License plate detection and positioning method, device, equipment and computer readable medium
US11654573B2 (en) Methods and systems for enabling human robot interaction by sharing cognition
CN113613188B (en) Fingerprint library updating method, device, computer equipment and storage medium
CN115049744A (en) Robot hand-eye coordinate conversion method and device, computer equipment and storage medium
CN115205806A (en) Method and device for generating target detection model and automatic driving vehicle
CN115307641A (en) Robot positioning method, device, robot and storage medium
CN113960999A (en) Mobile robot repositioning method, system and chip
CN116266402A (en) Automatic object labeling method and device, electronic equipment and storage medium
CN110555909B (en) Power transmission tower model construction method, device, computer equipment and storage medium
EP2889724B1 (en) System and method for selecting features for identifying human activities in a human-computer interacting environment
CN111708046A (en) Method and device for processing plane data of obstacle, electronic equipment and storage medium
CN111462341A (en) Augmented reality construction assisting method, device, terminal and medium
RU2759773C1 (en) Method and system for determining the location of the user

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: Robot relocation method and device

Effective date of registration: 20210907

Granted publication date: 20210316

Pledgee: Zhongguancun Beijing technology financing Company limited by guarantee

Pledgor: Beijing Dog Intelligent Robot Technology Co.,Ltd.

Registration number: Y2021990000811