CN113095227A - Robot positioning method and device, electronic equipment and storage medium - Google Patents

Robot positioning method and device, electronic equipment and storage medium

Info

Publication number
CN113095227A
CN113095227A (application CN202110396871.XA; granted as CN113095227B)
Authority
CN
China
Prior art keywords
robot
point cloud data
preset pile point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110396871.XA
Other languages
Chinese (zh)
Other versions
CN113095227B (en)
Inventor
郭自强
张艳武
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jingdong Shuke Haiyi Information Technology Co Ltd
Original Assignee
Jingdong Shuke Haiyi Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jingdong Shuke Haiyi Information Technology Co Ltd
Priority to CN202110396871.XA
Publication of CN113095227A
Application granted
Publication of CN113095227B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/29 Geographical information databases
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Theoretical Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Automation & Control Theory (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Electromagnetism (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Manipulator (AREA)

Abstract

The application provides a robot positioning method and device, an electronic device, and a storage medium, belonging to the technical field of robots. In the method, when the robot is not located at a preset pile point, the positioning device is controlled to perform relocation; when relocation fails, the acquisition device is controlled to capture and display a first image of the surrounding environment; when a selection operation that selects an identification area in a preset map based on the first image is detected, the positioning device is controlled to perform relocation within the identification area; and when relocation succeeds within the identification area, the relocated position is determined as the current position of the robot. The scheme reduces the number of times the map must be rebuilt, and therefore the frequency of map reconstruction, saving manpower and material resources.

Description

Robot positioning method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of robotics, and in particular, to a method and an apparatus for positioning a robot, an electronic device, and a storage medium.
Background
In an unknown scene, a robot needs autonomous navigation capability to move intelligently. Autonomous navigation depends on a map and on locating the robot within that map; robot positioning, the process of determining the robot's position in the map, is the key to autonomous navigation.
When the robot is started, its positioning device must detect the surrounding environment to determine the robot's position in the map. In practice, the actual environment may change over time, preventing the positioning device from determining the position. For example, suppose an obstacle is added to area A of the map: when the robot reaches area A and the positioning device attempts to determine its position, the detected markers of area A differ from those recorded when the map was created, so the device cannot confirm that the robot is in area A, and positioning fails.
With this positioning method, a map must be created anew each time the actual environment changes and positioning fails, so the map is rebuilt frequently and manpower and material resources are wasted.
Disclosure of Invention
An object of the embodiments of the present application is to provide a robot positioning method, apparatus, electronic device, and storage medium, so as to solve the problems of frequent map reconstruction and wasted manpower and material resources. The specific technical scheme is as follows:
in a first aspect, a robot positioning method is provided, the method comprising:
when the robot is not located at the preset pile point, controlling the positioning device to reposition;
when the relocation fails, controlling the acquisition device to acquire and display a first image of the surrounding environment;
when a selection operation of selecting an identification area in a preset map based on the first image is detected, controlling the positioning device to perform relocation within the identification area;
when relocation is successful within the identification area, determining the relocated position as the current position of the robot.
In one possible embodiment, the controlling the positioning device to perform relocation within the identification area includes:
acquiring first point cloud data of a first obstacle in the identification area detected by the positioning device at the current position;
determining whether a position corresponding to the first point cloud data exists in the preset map;
and if the position corresponding to the first point cloud data exists, determining that the relocation is successful.
In one possible embodiment, the method further comprises:
if the position corresponding to the first point cloud data does not exist, determining at least one candidate position by using the first point cloud data and the first image;
presenting the at least one candidate location;
and if the selection operation of selecting the target position based on the at least one candidate position is detected, determining the target position as the current position of the robot.
In one possible embodiment, the method further comprises:
if the relocation fails in the identification area, sending first prompt information, wherein the first prompt information is used for prompting a user to move the robot to a preset pile point;
after the robot is moved to a preset pile point, acquiring a signal returned by the preset pile point and second point cloud data of a second obstacle detected by the positioning device at the current position;
repositioning based on the signal and the second point cloud data.
In one possible embodiment, the relocating based on the signal and the second point cloud data comprises:
judging whether the robot is connected with the preset pile point or not based on the signal;
if the robot is connected with the preset pile point, judging whether the second point cloud data is consistent with the point cloud data corresponding to the position of the preset pile point;
and if the second point cloud data is consistent with the point cloud data corresponding to the position of the preset pile point, the relocation is successful.
In one possible embodiment, the method further comprises:
and if the second point cloud data is inconsistent with the point cloud data corresponding to the position of the preset pile point, sending second prompt information, wherein the second prompt information is used for prompting the user to re-create the map.
In one possible embodiment, the method further comprises:
if the second point cloud data is inconsistent with the point cloud data corresponding to the position of the preset pile point, sending third prompt information, wherein the third prompt information is used for prompting the user to issue a forced identification instruction;
and receiving the forced identification instruction, and determining the position of the preset pile point as the position of the robot according to the instruction of the forced identification instruction.
In a second aspect, there is provided a robot positioning device, the device comprising:
the first control module is used for controlling the positioning device to reposition when the robot is not positioned at the preset pile point;
the second control module is used for controlling the acquisition device to acquire and display a first image of the surrounding environment when the relocation fails;
the third control module is used for controlling the positioning device to perform relocation within the identification area when a selection operation of selecting an identification area in a preset map based on the first image is detected;
and the first determination module is used for determining the relocated position as the current position of the robot when the relocation is successful in the identification area.
In one possible embodiment, the third control module includes:
an acquisition unit, configured to acquire first point cloud data of a first obstacle in the identification area detected by the positioning apparatus at a current position;
a first determining unit, configured to determine whether a location corresponding to the first point cloud data exists in the preset map;
and the second determining unit is used for determining that the relocation is successful if the position corresponding to the first point cloud data exists.
In one possible embodiment, the apparatus further comprises:
a second determining module, configured to determine at least one candidate location by using the first point cloud data and the first image if there is no location corresponding to the first point cloud data;
a presentation module for presenting the at least one candidate location;
a third determining module, configured to determine, if a selection operation for selecting a target position based on the at least one candidate position is detected, the target position as the current position of the robot.
In one possible embodiment, the apparatus further comprises:
the first sending module is used for sending first prompt information if relocation fails in the identification area, and the first prompt information is used for prompting a user to move the robot to a preset pile point;
the acquisition module is used for acquiring a signal returned by a preset pile point and second point cloud data of a second obstacle detected by the positioning device at the current position after the robot is moved to the preset pile point;
and the repositioning module is used for repositioning based on the signal and the second point cloud data.
In one possible embodiment, the relocation module includes:
the first judging unit is used for judging whether the robot is connected with the preset pile point or not based on the signal;
the second judgment unit is used for judging whether the second point cloud data is consistent with the point cloud data corresponding to the position of the preset pile point or not if the robot is connected with the preset pile point;
and the third determining unit is used for determining that relocation is successful if the second point cloud data is consistent with the point cloud data corresponding to the position of the preset pile point.
In one possible embodiment, the apparatus further comprises:
and the second sending module is used for sending second prompt information if the second point cloud data is inconsistent with the point cloud data corresponding to the position of the preset pile point, wherein the second prompt information is used for prompting the user to re-create the map.
In one possible embodiment, the apparatus further comprises:
the third sending module is used for sending third prompt information if the second point cloud data is inconsistent with the point cloud data corresponding to the position of the preset pile point, wherein the third prompt information is used for prompting the user to issue a forced identification instruction;
and the receiving module is used for receiving the forced identification instruction and determining the position of the preset pile point as the position of the robot according to the instruction of the forced identification instruction.
In a third aspect, an electronic device is provided, comprising a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with one another via the communication bus;
a memory for storing a computer program;
a processor for implementing the method steps of any of the first aspect when executing a program stored in the memory.
In a fourth aspect, a computer-readable storage medium is provided, wherein a computer program is stored in the computer-readable storage medium, and when executed by a processor, the computer program implements the method steps of any of the first aspects.
In a fifth aspect, a computer program product is provided comprising instructions which, when run on a computer, cause the computer to perform any of the above described robot positioning methods.
The embodiment of the application has the following beneficial effects:
the embodiment of the application provides a robot positioning method and device, electronic equipment and a storage medium, wherein when a robot is not located at a preset pile point, firstly, a positioning device is controlled to reposition; when the relocation fails, controlling the acquisition device to acquire and display a first image of the surrounding environment; then, when a selection operation of selecting an identification area in a preset map based on the first image is detected, controlling the positioning device to perform relocation in the identification area; finally, when the repositioning is successful in the identification area, determining the repositioned position as the current position of the robot.
According to the scheme, when the actual environment changes and positioning fails, the map cannot be directly rebuilt, whether the robot is located at the preset pile point or not is judged at first, when the robot is not located at the preset pile point, relocation is carried out, if relocation fails, a user is prompted to select the identification area so as to narrow the detection range, relocation is carried out again in the identification area, and therefore the probability of successful positioning based on the existing map can be improved, the number of times of rebuilding the map is reduced, the frequency of rebuilding the map is reduced, and manpower and material resources are saved.
Of course, not all advantages described above need to be achieved at the same time in the practice of any one product or method of the present application.
Drawings
To more clearly illustrate the technical solutions in the embodiments of the present application or in the prior art, the drawings required for describing the embodiments or the prior art are briefly introduced below. It is apparent that a person skilled in the art can obtain other drawings from these drawings without inventive effort.
Fig. 1 is a flowchart of a robot positioning method according to an embodiment of the present disclosure;
fig. 2 is a flowchart of a robot positioning method according to another embodiment of the present disclosure;
fig. 3 is a flowchart of a robot positioning method according to another embodiment of the present application;
fig. 4 is a flowchart of a robot positioning method according to another embodiment of the present application;
fig. 5 is a schematic structural diagram of a robot positioning device according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the present application; all other embodiments obtained by a person skilled in the art from these embodiments without creative effort fall within the scope of protection of the present application.
At present, when the actual environment changes and positioning fails, the map must be created anew, so the map is rebuilt frequently and manpower and material resources are wasted. To address this, the embodiments of the present application provide a robot positioning method.
The robot positioning method provided in an embodiment of the present application is described in detail below with reference to specific embodiments. As shown in fig. 1, the specific steps are as follows:
s101, when the robot is not located at the preset pile point, controlling the positioning device to reposition.
In this embodiment of the application, a preset pile point is a coordinate point at which a specific article, marked when the robot first built the map, is located; for example, the position of the charging pile can be set as a pile point. Each time the robot is started, it first judges whether it is located at a preset pile point. When the robot is connected to the preset pile point, for charging or other operations, the pile point sends a signal indicating the connection. If no such signal is received, it is determined that the robot is not located at the preset pile point, and the positioning device is then controlled to detect the surrounding environment and perform relocation.
S102, when the relocation fails, controlling the acquisition device to acquire and display a first image of the surrounding environment.
In this embodiment of the application, the acquisition device may include, but is not limited to, a camera carried by the robot itself. If the positioning device fails to relocate, the acquisition device is controlled to capture a first image of the surrounding environment, and the first image is displayed to a user, generally an operation and maintenance person for the robot. The display mode may be, but is not limited to, sending the first image to a terminal display device, such as a mobile phone or a computer, that is communicatively connected to the acquisition device.
S103, when the selection operation of selecting the identification area in the preset map based on the first image is detected, controlling the positioning device to reposition in the identification area.
In this embodiment of the application, the preset map is a three-dimensional map, such as a point cloud map, built from the actual environment in which the robot works. After receiving the first image, the user can manually select an identification area in the map based on the first image, thereby narrowing the detection range of the positioning device and improving the probability of successful positioning. If a selection operation that selects an identification area in the preset map based on the first image is detected, the positioning device is controlled to perform relocation within the identification area selected by the user.
For example, suppose area A includes several sub-areas, such as A1 and A2, and an obstacle has been added only in sub-area A1. When the robot moves into sub-area A2, the detection device identifies the entire area A, finds that it differs from the markers recorded when the map was built, and positioning fails. At this point the first image is sent to the user. After the user designates sub-area A2 as the identification area based on the first image, the positioning device detects only within sub-area A2; because no new obstacle has been added there, A2 matches the markers recorded when the map was built, so the robot can be determined to be at a position in sub-area A2 and positioning succeeds.
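This A1/A2 example can be made concrete with a toy comparison. The per-sub-area obstacle "signatures" below are a hypothetical stand-in for the stored map data, not a structure from the disclosure; the point is only that a match restricted to the unchanged sub-area succeeds where a whole-area match fails:

```python
# Hypothetical map: one obstacle "signature" per sub-area, recorded at mapping time.
map_signatures = {"A1": {"wall", "shelf"}, "A2": {"wall", "pillar"}}

def matches(observed, sub_areas):
    """True only if the observation agrees with every listed sub-area's signature."""
    return all(observed[a] == map_signatures[a] for a in sub_areas)

# After mapping, a box was added to A1; A2 is unchanged.
observed = {"A1": {"wall", "shelf", "box"}, "A2": {"wall", "pillar"}}

assert not matches(observed, ["A1", "A2"])  # whole-area check fails (A1 changed)
assert matches(observed, ["A2"])            # restricted to A2, positioning succeeds
```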
And S104, when the relocation is successful in the identification area, determining the relocated position as the current position of the robot.
In this embodiment of the application, when relocation succeeds within the identification area, the relocated position is determined as the current position of the robot, which facilitates subsequent operation.
In this embodiment of the application, when the actual environment changes and positioning fails, the map is not immediately rebuilt. It is first judged whether the robot is located at a preset pile point; if not, relocation is performed; if relocation fails, the user is prompted to select an identification area to narrow the detection range, and relocation is performed again within that area. This increases the probability of successful positioning, reduces the number of times the map must be rebuilt, and saves manpower and material resources.
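The flow of S101 to S104 can be sketched as below. Every callable is a hypothetical stand-in for a device interface (the pile-point signal check, the positioning device, the acquisition device, and the terminal UI); none of these names comes from the disclosure:

```python
def locate(at_stake_point, relocate_global, capture_image,
           await_region_selection, relocate_in_region):
    """Sketch of S101-S104; returns a pose, or "stake_point" when the robot
    is already docked and its position is therefore known."""
    if at_stake_point():                       # connection signal from the pile point
        return "stake_point"
    pose = relocate_global()                   # S101: ordinary relocation
    if pose is not None:
        return pose
    image = capture_image()                    # S102: capture and display surroundings
    region = await_region_selection(image)     # S103: user selects an identification area
    return relocate_in_region(region)          # S103/S104: retry within that area
```

For instance, a robot that is not docked and whose global relocation fails falls through to the region-restricted retry.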
In another embodiment of the present application, S103 may include the following steps:
s201, first point cloud data of a first obstacle in the identification area detected by the positioning device at the current position are obtained.
In this embodiment of the application, point cloud data is a set of vectors in a three-dimensional coordinate system, from which position information can be determined. The first obstacle comprises all objects and terrain in the identification area that impede the robot's movement. Positioning can therefore be performed by acquiring the first point cloud data of the first obstacle in the identification area, detected by the positioning device at the current position.
S202, determining whether a position corresponding to the first point cloud data exists in the preset map.
In this embodiment of the application, the point cloud data corresponding to each position is recorded when the map is built. Therefore, after the first point cloud data is measured, whether a corresponding position exists can be determined by looking it up in the preset map, enabling relocation.
S203, if the position corresponding to the first point cloud data exists, determining that the relocation is successful.
In this embodiment of the application, if a position corresponding to the first point cloud data exists, that position is the current position of the robot, so relocation is determined to be successful.
In this embodiment of the application, relocation is performed using the first point cloud data of the first obstacle in the identification area, detected by the positioning device at the current position; the user does not need to be on site to assist, and the positioning process is simple and efficient.
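A minimal sketch of S201 to S203, assuming the preset map stores a reference point cloud per candidate position inside the identification area. This is an illustrative assumption: a real system would use a scan-matching algorithm such as ICP rather than the exact comparison below, and the `tol` threshold is invented for the sketch:

```python
import math

def relocate_in_region(scan, region_map, tol=0.05):
    """Return the stored position whose point cloud best matches `scan`,
    or None if no position in the region matches within `tol`.

    region_map: {position: [(x, y, z), ...]} holding only positions inside
    the user-selected identification area.
    """
    def cloud_error(a, b):
        # Crude per-point discrepancy; infinite when the clouds differ in size.
        if len(a) != len(b):
            return float("inf")
        return max(math.dist(p, q) for p, q in zip(sorted(a), sorted(b)))

    best, best_err = None, tol
    for pos, stored in region_map.items():
        err = cloud_error(scan, stored)
        if err < best_err:
            best, best_err = pos, err
    return best
```

A `None` result corresponds to S301 below: no position in the map matches the observed cloud, so the scene has changed.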
In yet another embodiment of the present application, the method may further comprise the steps of:
and S301, if the position corresponding to the first point cloud data does not exist, determining at least one candidate position by using the first point cloud data and the first image.
In this embodiment of the application, if no position corresponding to the first point cloud data exists, the scene of the identification area has changed, that is, the obstacles in the identification area have changed. In this case, the first point cloud data and the first image may be input into a preset underlying algorithm to determine at least one candidate position.
S302, displaying the at least one candidate position.
In this embodiment of the application, the at least one candidate position can be sent to a preset terminal or a preset server for display; the candidate positions may be displayed in list form so that the user can view them conveniently.
S303, if a selection operation for selecting a target position based on the at least one candidate position is detected, determining the target position as a current position of the robot.
In this embodiment of the application, after viewing the candidate positions, the user may manually select one of them as the target position. If a selection operation that selects a target position from the at least one candidate position is detected, the target position is determined as the current position of the robot, completing positioning.
In this embodiment of the application, when the robot cannot position itself automatically, the user completes positioning simply by selecting a target position from the candidate positions displayed on the terminal as the current position of the robot; no one needs to be on site to assist, and the positioning process is simple and efficient.
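S301 to S303 can be sketched as follows, with `choose` standing in for the terminal UI that shows the candidate list and returns the index of the user's pick (all names are illustrative, not from the disclosure):

```python
def resolve_position(candidates, choose):
    """Display candidate positions as a list (S302) and adopt the user's
    pick as the robot's current position (S303); None means no selection."""
    if not candidates:
        return None
    for i, pos in enumerate(candidates):
        print(f"[{i}] candidate position: {pos}")
    picked = choose(candidates)   # index returned by the terminal UI, or None
    return candidates[picked] if picked is not None else None
```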
In yet another embodiment of the present application, the method may further comprise the steps of:
s401, if the relocation fails in the identification area, sending first prompt information, wherein the first prompt information is used for prompting a user to move the robot to a preset pile point.
In this embodiment of the application, if relocation fails within the identification area, first prompt information can be sent to the terminal to prompt the user to move the robot to the preset pile point. The user may move the robot by remote control, or move it manually after the robot's motor is powered off.
S402, after the robot is moved to a preset pile point, a signal returned by the preset pile point and second point cloud data of a second obstacle detected by the positioning device at the current position are acquired.
In this embodiment, after the robot is moved to the preset pile point, the pile point may return a signal to the terminal. That signal is acquired, together with the second point cloud data of a second obstacle detected by the positioning device at the current position.
And S403, repositioning based on the signal and the second point cloud data.
In this embodiment of the application, after the signal returned by the preset pile point and the second point cloud data are obtained, relocation can be performed based on the signal and the second point cloud data.
In this embodiment of the application, when relocation fails within the identification area, the map is not directly rebuilt. Instead, first prompt information is sent to the terminal to prompt the user to move the robot to the preset pile point; after the robot is moved there, the signal returned by the pile point and the second point cloud data of the second obstacle detected by the positioning device at the current position are acquired; finally, relocation is performed based on the signal and the second point cloud data. This improves the probability of successful positioning, reduces the number of times the map must be rebuilt, and saves manpower and material resources.
In another embodiment of the present application, S403 may include the following steps:
step one, judging whether the robot is connected with the preset pile points or not based on the signals.
In this embodiment of the application, whether the robot is connected to the preset pile point can be judged based on the signal returned by the pile point; this connection check determines whether the robot is located at the preset pile point.
And step two, if the robot is connected with the preset pile point, judging whether the second point cloud data is consistent with the point cloud data corresponding to the position of the preset pile point.
In this embodiment, if the robot is connected to the preset pile point, the robot is located at the pile point. It must then be judged whether the second point cloud data is consistent with the point cloud data corresponding to the position of the pile point; this comparison determines whether the scene at that position has changed since the map was built.
And step three, if the second point cloud data is consistent with the point cloud data corresponding to the position of the preset pile point, the relocation is successful.
In this embodiment of the application, if the second point cloud data is consistent with the point cloud data corresponding to the position of the preset pile point, the scene at that position has not changed, and relocation is determined to be successful.
In this embodiment of the application, relocation at the pile point therefore proceeds as follows: judge, based on the signal, whether the robot is connected to the preset pile point; if so, judge whether the second point cloud data is consistent with the point cloud data corresponding to the pile point's position; if consistent, relocation is successful. Relocating in this way improves the probability of successful positioning, reduces the number of times the map must be rebuilt, and saves manpower and material resources.
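Steps one to three, together with the fall-back branches of the later embodiments, can be sketched as a single decision function. The string results and the exact-equality cloud comparator are illustrative assumptions; a real comparator would tolerate sensor noise:

```python
def relocate_at_stake_point(signal, scan, stake_cloud, same_cloud=None):
    """Step 1: is the robot docked? Steps 2-3: does the current scan match
    the point cloud stored for the pile-point position?
    Returns "not_docked", "success", or "scene_changed" (the last of which
    triggers the re-map or forced-identification prompts)."""
    if signal is None:                                    # step 1: no connection signal
        return "not_docked"
    if same_cloud is None:
        same_cloud = lambda a, b: sorted(a) == sorted(b)  # stand-in comparator
    if same_cloud(scan, stake_cloud):                     # steps 2-3: clouds consistent
        return "success"
    return "scene_changed"
```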
In yet another embodiment of the present application, the method may further comprise the steps of:
If the second point cloud data is inconsistent with the point cloud data corresponding to the position of the preset pile point, second prompt information is sent, where the second prompt information is used to prompt the user to re-create the map.
In this embodiment of the application, if the second point cloud data is inconsistent with the point cloud data corresponding to the position of the preset pile point, the scene at that position has changed, and the change may affect the robot's subsequent work. At this time, second prompt information can be sent to the terminal to prompt the user to re-create the map. Prompting the user to re-create the map in this case avoids the impact that a changed scene at the preset pile point would otherwise have on the robot's subsequent work.
In yet another embodiment of the present application, the method may further comprise the steps of:
Step one: if the second point cloud data is inconsistent with the point cloud data corresponding to the position of the preset pile point, send third prompt information, where the third prompt information is used to prompt the user to issue a forced identification instruction.
In this embodiment of the application, if the second point cloud data is inconsistent with the point cloud data corresponding to the position of the preset pile point, the scene at that position has changed. At this time, third prompt information is sent to the terminal to prompt the user to issue a forced identification instruction, where the forced identification instruction instructs that the position of the preset pile point be determined as the position of the robot.
Step two: receive the forced identification instruction, and determine the position of the preset pile point as the position of the robot according to the instruction.
In this embodiment of the application, after the user issues the forced identification instruction, the instruction is received and the position of the preset pile point is determined as the position of the robot, thereby completing the relocation, reducing the number of times the map is rebuilt, and saving manpower and material resources.
In this embodiment of the application, when a change in the actual environment causes positioning to fail, the map is not immediately rebuilt. Instead, it is first judged whether the robot is located at the preset pile point; if not, relocation is performed. When relocation fails, the user is prompted to select an identification area so as to narrow the detection range, and relocation is performed again within that area. This increases the probability of successful positioning, reduces the number of times and the frequency with which the map is rebuilt, and saves manpower and material resources.
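The overall flow described above can be sketched as follows. The method names on `robot` are hypothetical stand-ins for the modules in the patent, and `relocate()` is assumed to return a pose on success and `None` on failure:

```python
def position_robot(robot):
    """Overall positioning flow: pile-point check first, then global
    relocation, then user-assisted relocation within an identification area."""
    if robot.at_preset_pile():
        # Docked at the pile: the pile position is the robot position.
        return robot.pile_position()
    pose = robot.relocate()                 # first relocation attempt
    if pose is not None:
        return pose
    image = robot.capture_surroundings()    # collect and display a first image
    area = robot.select_identification_area(image)  # user narrows the range
    return robot.relocate(within=area)      # relocate again inside the area
```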
Based on the same technical concept, embodiments of the present application further provide a robot positioning apparatus, as shown in fig. 5, the apparatus includes:
the first control module 501 is used for controlling the positioning device to reposition when the robot is not located at a preset pile point;
the second control module 502 is configured to control the acquisition device to acquire and display a first image of a surrounding environment when relocation fails;
a third control module 503, configured to control the positioning apparatus to perform relocation within the identification area when a selection operation of selecting the identification area within a preset map based on the first image is detected;
a first determining module 504 for determining the relocated position as the current position of the robot when the relocation is successful within the identified area.
In one possible embodiment, the third control module includes:
an acquisition unit, configured to acquire first point cloud data of a first obstacle in the identification area detected by the positioning apparatus at a current position;
a first determining unit, configured to determine whether a location corresponding to the first point cloud data exists in the preset map;
and the second determining unit is used for determining that the relocation is successful if the position corresponding to the first point cloud data exists.
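One way to realize the check for "a position corresponding to the first point cloud data" in a preset grid map — the grid resolution, angular sampling, and hit ratio below are illustrative assumptions, since the patent leaves the matching method open — is a brute-force scan-to-map match restricted to the identification area:

```python
import math

def find_position(scan, occupied, area, step=0.5, angles=8, min_hits=0.8):
    """Search candidate poses inside the identification area for one whose
    transformed scan overlaps the occupied cells of the preset map.

    scan:     list of (x, y) points from the first point cloud data
    occupied: set of occupied integer grid cells of the preset map
    area:     ((x0, y0), (x1, y1)) bounds of the identification area
    Returns a pose (x, y, theta) on success, or None if relocation fails.
    """
    if not scan:
        return None
    (x0, y0), (x1, y1) = area
    x = x0
    while x <= x1:
        y = y0
        while y <= y1:
            for k in range(angles):
                th = 2 * math.pi * k / angles
                # Count scan points that land on occupied map cells
                # after rotating by th and translating by (x, y).
                hits = sum(
                    (round(x + px * math.cos(th) - py * math.sin(th)),
                     round(y + px * math.sin(th) + py * math.cos(th))) in occupied
                    for (px, py) in scan)
                if hits / len(scan) >= min_hits:
                    return (x, y, th)
            y += step
        x += step
    return None
```

Restricting the search to the user-selected area is what makes the retry cheaper and more likely to succeed than a whole-map search.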
In one possible embodiment, the apparatus further comprises:
a second determining module, configured to determine at least one candidate location by using the first point cloud data and the first image if there is no location corresponding to the first point cloud data;
a presentation module for presenting the at least one candidate location;
a third determining module, configured to determine, if a selection operation for selecting a target position based on the at least one candidate position is detected, the target position as the current position of the robot.
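The patent leaves open how the candidate positions are derived and ordered. One plausible sketch — assuming each candidate pose already carries a match score from the point-cloud and image comparison, which is an assumption of this example — is to rank the candidates, present the best few, and take the user's pick as the target position:

```python
def candidate_positions(scored_poses, top_k=3):
    """Keep the best top_k candidate poses for the user to choose from.
    scored_poses maps a pose (x, y) to a match score; the ranking criterion
    and the top_k default are illustrative assumptions."""
    ranked = sorted(scored_poses.items(), key=lambda kv: kv[1], reverse=True)
    return [pose for pose, _ in ranked[:top_k]]

def confirm_target(candidates, chosen_index):
    """The user's selection operation: the chosen candidate becomes the
    target position, i.e. the current position of the robot."""
    return candidates[chosen_index]
```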
In one possible embodiment, the apparatus further comprises:
the first sending module is used for sending first prompt information if relocation fails in the identification area, and the first prompt information is used for prompting a user to move the robot to a preset pile point;
the acquisition module is used for acquiring a signal returned by a preset pile point and second point cloud data of a second obstacle detected by the positioning device at the current position after the robot is moved to the preset pile point;
and the repositioning module is used for repositioning based on the signal and the second point cloud data.
In one possible embodiment, the relocation module includes:
the first judging unit is used for judging whether the robot is connected with the preset pile point or not based on the signal;
the second judgment unit is used for judging whether the second point cloud data is consistent with the point cloud data corresponding to the position of the preset pile point or not if the robot is connected with the preset pile point;
and the third determining unit is used for determining that the relocation is successful if the second point cloud data is consistent with the point cloud data corresponding to the position of the preset pile point.
In one possible embodiment, the apparatus further comprises:
and the second sending module is used for sending second prompt information if the second point cloud data is inconsistent with the point cloud data corresponding to the position of the preset pile point, wherein the second prompt information is used for prompting the user to re-create the map.
In one possible embodiment, the apparatus further comprises:
the third sending module is used for sending third prompt information if the second point cloud data is inconsistent with the point cloud data corresponding to the position of the preset pile point, wherein the third prompt information is used for prompting the user to issue a forced identification instruction;
and the receiving module is used for receiving the forced identification instruction and determining the position of the preset pile point as the position of the robot according to the instruction of the forced identification instruction.
In this embodiment of the application, when a change in the actual environment causes positioning to fail, the map is not immediately rebuilt. Instead, it is first judged whether the robot is located at the preset pile point; if not, relocation is performed. When relocation fails, the user is prompted to select an identification area so as to narrow the detection range, and relocation is performed again within that area. This increases the probability of successful positioning, reduces the number of times and the frequency with which the map is rebuilt, and saves manpower and material resources.
Based on the same technical concept, an embodiment of the present invention further provides an electronic device, as shown in fig. 6, including a processor 601, a communication interface 602, a memory 603, and a communication bus 604, where the processor 601, the communication interface 602, and the memory 603 communicate with one another through the communication bus 604.
a memory 603 for storing a computer program;
the processor 601 is configured to implement the following steps when executing the program stored in the memory 603:
when the robot is not located at the preset pile point, controlling the positioning device to reposition;
when the relocation fails, controlling the acquisition device to acquire and display a first image of the surrounding environment;
controlling the positioning device to perform relocation within the identification area when a selection operation of selecting the identification area in a preset map based on the first image is detected;
when the repositioning is successful within the identified area, determining the repositioned position as the current position of the robot.
The communication bus of the above electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in the figure, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the electronic equipment and other equipment.
The memory may include a random access memory (RAM) or a non-volatile memory (NVM), such as at least one disk memory. Optionally, the memory may also be at least one storage device located remotely from the processor.
The processor may be a general-purpose processor, such as a central processing unit (CPU) or a network processor (NP); it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
In a further embodiment of the present invention, a computer-readable storage medium is also provided, in which a computer program is stored, which, when being executed by a processor, implements the steps of any of the above-mentioned robot positioning methods.
In a further embodiment provided by the present invention, there is also provided a computer program product comprising instructions which, when run on a computer, cause the computer to perform any of the robot positioning methods of the above embodiments.
In the above embodiments, the implementation may be realized wholly or partially in software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized wholly or partially in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in accordance with the embodiments of the invention are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium may be any available medium that a computer can access, or a data storage device, such as a server or data center, that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, hard disk, or magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid state disk (SSD)), among others.
It is noted that, in this document, relational terms such as "first" and "second" may be used solely to distinguish one entity or action from another entity or action, without necessarily requiring or implying any actual such relationship or order between those entities or actions. Also, the terms "comprises," "comprising," and any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a(n) …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The above description is merely exemplary of the present application and is presented to enable those skilled in the art to understand and practice the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A robot positioning method, comprising:
when the robot is not located at the preset pile point, controlling the positioning device to reposition;
when the relocation fails, controlling the acquisition device to acquire and display a first image of the surrounding environment;
controlling the positioning device to perform relocation within the identification area when a selection operation of selecting the identification area in a preset map based on the first image is detected;
when the repositioning is successful within the identified area, determining the repositioned position as the current position of the robot.
2. The method of claim 1, wherein said controlling said positioning device to reposition within said identified area comprises:
acquiring first point cloud data of a first obstacle in the identification area detected by the positioning device at the current position;
determining whether a position corresponding to the first point cloud data exists in the preset map;
and if the position corresponding to the first point cloud data exists, determining that the relocation is successful.
3. The method of claim 2, further comprising:
if the position corresponding to the first point cloud data does not exist, determining at least one candidate position by using the first point cloud data and the first image;
presenting the at least one candidate location;
and if the selection operation of selecting the target position based on the at least one candidate position is detected, determining the target position as the current position of the robot.
4. The method of claim 1, further comprising:
if the relocation fails in the identification area, sending first prompt information, wherein the first prompt information is used for prompting a user to move the robot to a preset pile point;
after the robot is moved to a preset pile point, acquiring a signal returned by the preset pile point and second point cloud data of a second obstacle detected by the positioning device at the current position;
repositioning based on the signal and the second point cloud data.
5. The method of claim 4, wherein the relocating based on the signal and the second point cloud data comprises:
judging, based on the signal, whether the robot is connected to the preset pile point;
if the robot is connected with the preset pile point, judging whether the second point cloud data is consistent with the point cloud data corresponding to the position of the preset pile point;
and if the second point cloud data is consistent with the point cloud data corresponding to the position of the preset pile point, the relocation is successful.
6. The method of claim 5, further comprising:
and if the second point cloud data is inconsistent with the point cloud data corresponding to the position of the preset pile point, sending second prompt information, wherein the second prompt information is used for prompting the user to re-create the map.
7. The method of claim 5, further comprising:
if the second point cloud data is inconsistent with the point cloud data corresponding to the position of the preset pile point, sending third prompt information, wherein the third prompt information is used for prompting the user to issue a forced identification instruction;
and receiving the forced identification instruction, and determining the position of the preset pile point as the position of the robot according to the instruction of the forced identification instruction.
8. A robot positioning device, characterized in that the device comprises:
the first control module is used for controlling the positioning device to reposition when the robot is not positioned at the preset pile point;
the second control module is used for controlling the acquisition device to acquire and display a first image of the surrounding environment when the relocation fails;
the third control module is used for controlling the positioning device to perform relocation within the identification area when a selection operation of selecting the identification area in a preset map based on the first image is detected;
and the determining module is used for determining the relocated position as the current position of the robot when the relocation is successful in the identification area.
9. An electronic device, characterized by comprising a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with one another through the communication bus;
a memory for storing a computer program;
a processor for implementing the method steps of any of claims 1 to 7 when executing a program stored in the memory.
10. A computer-readable storage medium, characterized in that a computer program is stored in the computer-readable storage medium, which computer program, when being executed by a processor, carries out the method steps of any one of claims 1 to 7.
CN202110396871.XA 2021-04-13 2021-04-13 Robot positioning method and device, electronic equipment and storage medium Active CN113095227B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110396871.XA CN113095227B (en) 2021-04-13 2021-04-13 Robot positioning method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110396871.XA CN113095227B (en) 2021-04-13 2021-04-13 Robot positioning method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113095227A true CN113095227A (en) 2021-07-09
CN113095227B CN113095227B (en) 2023-11-07

Family

ID=76677006

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110396871.XA Active CN113095227B (en) 2021-04-13 2021-04-13 Robot positioning method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113095227B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113478488A (en) * 2021-07-14 2021-10-08 上海擎朗智能科技有限公司 Robot repositioning method and device, electronic equipment and storage medium
CN114012725A (en) * 2021-11-05 2022-02-08 深圳拓邦股份有限公司 Robot repositioning method, system, robot and storage medium
WO2023202256A1 (en) * 2022-04-22 2023-10-26 追觅创新科技(苏州)有限公司 Coordinate-based repositioning method and system, and cleaning robot

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20090088516A (en) * 2008-02-15 2009-08-20 한국과학기술연구원 Method for self-localization of a robot based on object recognition and environment information around the recognized object
US20100324773A1 (en) * 2007-06-28 2010-12-23 Samsung Electronics Co., Ltd. Method and apparatus for relocating mobile robot
CN106092104A (en) * 2016-08-26 2016-11-09 深圳微服机器人科技有限公司 The method for relocating of a kind of Indoor Robot and device
CN106323273A (en) * 2016-08-26 2017-01-11 深圳微服机器人科技有限公司 Robot relocation method and device
CN106813672A (en) * 2017-01-22 2017-06-09 深圳悉罗机器人有限公司 The air navigation aid and mobile robot of mobile robot
CN107037806A (en) * 2016-02-04 2017-08-11 科沃斯机器人股份有限公司 Self-movement robot re-positioning method and the self-movement robot using this method
CN109986561A (en) * 2019-03-29 2019-07-09 北京猎户星空科技有限公司 A kind of robot long-distance control method, device and storage medium
CN109993794A (en) * 2019-03-29 2019-07-09 北京猎户星空科技有限公司 A kind of robot method for relocating, device, control equipment and storage medium
CN111578946A (en) * 2020-05-27 2020-08-25 杭州蓝芯科技有限公司 Laser navigation AGV repositioning method and device based on deep learning
US20210063577A1 (en) * 2019-08-26 2021-03-04 Ubtech Robotics Corp Ltd Robot relocalization method and apparatus and robot using the same

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100324773A1 (en) * 2007-06-28 2010-12-23 Samsung Electronics Co., Ltd. Method and apparatus for relocating mobile robot
KR20090088516A (en) * 2008-02-15 2009-08-20 한국과학기술연구원 Method for self-localization of a robot based on object recognition and environment information around the recognized object
CN107037806A (en) * 2016-02-04 2017-08-11 科沃斯机器人股份有限公司 Self-movement robot re-positioning method and the self-movement robot using this method
CN106092104A (en) * 2016-08-26 2016-11-09 深圳微服机器人科技有限公司 The method for relocating of a kind of Indoor Robot and device
CN106323273A (en) * 2016-08-26 2017-01-11 深圳微服机器人科技有限公司 Robot relocation method and device
CN106813672A (en) * 2017-01-22 2017-06-09 深圳悉罗机器人有限公司 The air navigation aid and mobile robot of mobile robot
CN109986561A (en) * 2019-03-29 2019-07-09 北京猎户星空科技有限公司 A kind of robot long-distance control method, device and storage medium
CN109993794A (en) * 2019-03-29 2019-07-09 北京猎户星空科技有限公司 A kind of robot method for relocating, device, control equipment and storage medium
US20210063577A1 (en) * 2019-08-26 2021-03-04 Ubtech Robotics Corp Ltd Robot relocalization method and apparatus and robot using the same
CN111578946A (en) * 2020-05-27 2020-08-25 杭州蓝芯科技有限公司 Laser navigation AGV repositioning method and device based on deep learning

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
MA Yuelong; CAO Xuefeng; CHEN Ding; LI Dengfeng; JIANG Bingchuan: "A real-time indoor relocalization method for robots based on a point cloud map", Journal of System Simulation, no. 1, pages 15 - 23 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113478488A (en) * 2021-07-14 2021-10-08 上海擎朗智能科技有限公司 Robot repositioning method and device, electronic equipment and storage medium
CN114012725A (en) * 2021-11-05 2022-02-08 深圳拓邦股份有限公司 Robot repositioning method, system, robot and storage medium
CN114012725B (en) * 2021-11-05 2023-08-08 深圳拓邦股份有限公司 Robot repositioning method, system, robot and storage medium
WO2023202256A1 (en) * 2022-04-22 2023-10-26 追觅创新科技(苏州)有限公司 Coordinate-based repositioning method and system, and cleaning robot

Also Published As

Publication number Publication date
CN113095227B (en) 2023-11-07

Similar Documents

Publication Publication Date Title
US11422261B2 (en) Robot relocalization method and apparatus and robot using the same
CN113095227A (en) Robot positioning method and device, electronic equipment and storage medium
CN110442120B (en) Method for controlling robot to move in different scenes, robot and terminal equipment
CN103841374A (en) Display method and system for video monitoring image
EP4283567A1 (en) Three-dimensional map construction method and apparatus
US11055107B2 (en) Electronic apparatus and method of executing application program
US20240069550A1 (en) Method for processing abnormality of material pushing robot, device, server, and storage medium
CN104159296A (en) Location method and device
CN112790669A (en) Sweeping method and device of sweeper and storage medium
CN113741446B (en) Robot autonomous exploration method, terminal equipment and storage medium
CN110046009B (en) Recording method, recording device, server and readable storage medium
CN105159528A (en) Picture content display method and mobile terminal
WO2020024845A1 (en) Positioning method and apparatus
CN111240622B (en) Drawing method and device
CN109144379B (en) Method for operating terminal, terminal detection device, system and storage medium
CN113440054B (en) Method and device for determining range of charging base of sweeping robot
CN111136689B (en) Self-checking method and device
CN114359548A (en) Circle searching method and device, electronic equipment and storage medium
CN112766138A (en) Positioning method, device and equipment based on image recognition and storage medium
CN106484625A (en) A kind of method based on universal test Software Development Platform test subsystems
CN111457924A (en) Indoor map processing method and device, electronic equipment and storage medium
CN111724621A (en) Vehicle searching system, method, computer readable storage medium and client
CN113032055A (en) Terminal display interface control method and device, computer equipment and storage medium
CN111475018B (en) Control method and device of sweeping robot, sweeping robot and electronic equipment
WO2023155732A1 (en) Area information processing method and apparatus, storage medium, and electronic device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
CB02 Change of applicant information

Address after: 601, 6 / F, building 2, No. 18, Kechuang 11th Street, Daxing District, Beijing, 100176

Applicant after: Jingdong Technology Information Technology Co.,Ltd.

Address before: 601, 6 / F, building 2, No. 18, Kechuang 11th Street, Daxing District, Beijing, 100176

Applicant before: Jingdong Shuke Haiyi Information Technology Co.,Ltd.

GR01 Patent grant
GR01 Patent grant