CN113095227B - Robot positioning method and device, electronic equipment and storage medium - Google Patents

Robot positioning method and device, electronic equipment and storage medium Download PDF

Info

Publication number
CN113095227B
CN113095227B (application CN202110396871.XA)
Authority
CN
China
Prior art keywords
robot
cloud data
point cloud
point
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110396871.XA
Other languages
Chinese (zh)
Other versions
CN113095227A (en)
Inventor
郭自强
张艳武
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jingdong Technology Information Technology Co Ltd
Original Assignee
Jingdong Technology Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jingdong Technology Information Technology Co Ltd filed Critical Jingdong Technology Information Technology Co Ltd
Priority to CN202110396871.XA priority Critical patent/CN113095227B/en
Publication of CN113095227A publication Critical patent/CN113095227A/en
Application granted granted Critical
Publication of CN113095227B publication Critical patent/CN113095227B/en
Legal status: Active

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/29 Geographical information databases
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Theoretical Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Automation & Control Theory (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Electromagnetism (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Manipulator (AREA)

Abstract

The application provides a robot positioning method and device, an electronic device, and a storage medium, belonging to the technical field of robots. When the robot is not located at a preset stake point, the positioning device is controlled to reposition; when repositioning fails, the acquisition device is controlled to acquire a first image of the surrounding environment and the first image is displayed; when a selection operation of selecting an identification area in a preset map based on the first image is detected, the positioning device is controlled to reposition within the identification area; when repositioning within the identification area succeeds, the repositioned position is determined as the current position of the robot. This reduces the number of times the map must be rebuilt, lowering the frequency of map reconstruction and saving manpower and material resources.

Description

Robot positioning method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of robots, and in particular, to a method and apparatus for positioning a robot, an electronic device, and a storage medium.
Background
In an unknown scene, a robot must have autonomous navigation capability in order to move intelligently. Autonomous navigation depends on a map and on localization within that map; robot positioning, the process of determining where the robot is in the map, is therefore the key to realizing autonomous navigation.
Each time the robot starts, it uses its positioning device to detect the environment and position itself, thereby determining its location in the map. In practice, the actual environment may change over time, so that the positioning device cannot determine the robot's position in the map. For example, when the robot reaches area A, the positioning device may find that area A differs from what was marked when the map was built; it then cannot determine whether the robot is in area A, positioning fails, and the map generally has to be rebuilt according to the changes in the actual environment.
With this positioning method, however, the map must be re-created whenever positioning fails because the actual environment has changed, so the map is rebuilt frequently and manpower and material resources are wasted.
Disclosure of Invention
The embodiments of the application aim to provide a robot positioning method and device, an electronic device, and a storage medium, so as to solve the problem that the map is rebuilt frequently and manpower and material resources are wasted. The specific technical scheme is as follows:
in a first aspect, a robot positioning method is provided, the method comprising:
when the robot is not located at a preset stake point, controlling the positioning device to reposition;
when repositioning fails, controlling an acquisition device to acquire a first image of the surrounding environment and displaying the first image;
when a selection operation of selecting an identification area in a preset map based on the first image is detected, controlling the positioning device to reposition within the identification area;
when repositioning within the identification area succeeds, determining the repositioned position as the current position of the robot.
In one possible embodiment, the controlling the positioning device to reposition within the identification area includes:
acquiring first point cloud data of a first obstacle in the identification area detected by the positioning device at the current position;
determining whether a position corresponding to the first point cloud data exists in the preset map;
and if the position corresponding to the first point cloud data exists, determining that repositioning is successful.
In one possible embodiment, the method further comprises:
if no position corresponding to the first point cloud data exists, determining at least one candidate position using the first point cloud data and the first image;
displaying the at least one candidate location;
and if a selection operation of selecting a target position based on the at least one candidate position is detected, determining the target position as the current position of the robot.
In one possible embodiment, the method further comprises:
if repositioning fails within the identification area, sending first prompt information, where the first prompt information is used to prompt a user to move the robot to a preset stake point;
after the robot has been moved to the preset stake point, acquiring a signal returned by the preset stake point and second point cloud data of a second obstacle detected by the positioning device at the current position;
repositioning based on the signal and the second point cloud data.
In one possible implementation, the repositioning based on the signal and the second point cloud data includes:
judging, based on the signal, whether the robot is connected to the preset stake point;
if the robot is connected to the preset stake point, judging whether the second point cloud data is consistent with the point cloud data corresponding to the position of the preset stake point;
if the second point cloud data is consistent with the point cloud data corresponding to the position of the preset stake point, determining that repositioning is successful.
In one possible embodiment, the method further comprises:
and if the second point cloud data is inconsistent with the point cloud data corresponding to the position of the preset stake point, sending second prompt information, where the second prompt information is used to prompt the user to re-create the map.
In one possible embodiment, the method further comprises:
if the second point cloud data is inconsistent with the point cloud data corresponding to the position of the preset stake point, sending third prompt information, where the third prompt information is used to prompt the user to issue a forced identification instruction;
and receiving the forced identification instruction, and determining the position of the preset stake point as the position of the robot as indicated by the forced identification instruction.
In a second aspect, there is provided a robotic positioning device, the device comprising:
a first control module, configured to control the positioning device to reposition when the robot is not located at a preset stake point;
a second control module, configured to control the acquisition device to acquire a first image of the surrounding environment and display the first image when repositioning fails;
a third control module, configured to control the positioning device to reposition within an identification area when a selection operation of selecting the identification area in a preset map based on the first image is detected;
and a first determining module, configured to determine the repositioned position as the current position of the robot when repositioning within the identification area succeeds.
In one possible embodiment, the third control module includes:
an acquisition unit, configured to acquire first point cloud data of a first obstacle in the identification area detected by the positioning device at a current position;
a first determining unit, configured to determine, in the preset map, whether a location corresponding to the first point cloud data exists;
and the second determining unit is used for determining that the repositioning is successful if the position corresponding to the first point cloud data exists.
In one possible embodiment, the apparatus further comprises:
a second determining module, configured to determine, if there is no location corresponding to the first point cloud data, at least one candidate location using the first point cloud data and the first image;
a display module for displaying the at least one candidate location;
and a third determining module, configured to determine, if a selection operation for selecting a target position based on the at least one candidate position is detected, the target position as a current position of the robot.
In one possible embodiment, the apparatus further comprises:
a first sending module, configured to send first prompt information if repositioning fails within the identification area, where the first prompt information is used to prompt a user to move the robot to a preset stake point;
an acquisition module, configured to acquire, after the robot has been moved to the preset stake point, a signal returned by the preset stake point and second point cloud data of a second obstacle detected by the positioning device at the current position;
and a repositioning module, configured to reposition based on the signal and the second point cloud data.
In one possible embodiment, the relocation module comprises:
a first judging unit, configured to judge, based on the signal, whether the robot is connected to the preset stake point;
a second judging unit, configured to judge, if the robot is connected to the preset stake point, whether the second point cloud data is consistent with the point cloud data corresponding to the position of the preset stake point;
and a third determining unit, configured to determine that repositioning is successful if the second point cloud data is consistent with the point cloud data corresponding to the position of the preset stake point.
In one possible embodiment, the apparatus further comprises:
a second sending module, configured to send second prompt information if the second point cloud data is inconsistent with the point cloud data corresponding to the position of the preset stake point, where the second prompt information is used to prompt the user to re-create the map.
In one possible embodiment, the apparatus further comprises:
a third sending module, configured to send third prompt information if the second point cloud data is inconsistent with the point cloud data corresponding to the position of the preset stake point, where the third prompt information is used to prompt the user to issue a forced identification instruction;
and a receiving module, configured to receive the forced identification instruction and determine the position of the preset stake point as the position of the robot as indicated by the forced identification instruction.
In a third aspect, an electronic device is provided, including a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory complete communication with each other through the communication bus;
a memory for storing a computer program;
a processor for implementing the method steps of any of the first aspects when executing a program stored on a memory.
In a fourth aspect, a computer-readable storage medium is provided, characterized in that the computer-readable storage medium has stored therein a computer program which, when executed by a processor, implements the method steps of any of the first aspects.
In a fifth aspect, there is provided a computer program product comprising instructions which, when run on a computer, cause the computer to perform any of the robot positioning methods described above.
The embodiment of the application has the beneficial effects that:
the embodiment of the application provides a robot positioning method, a device, electronic equipment and a storage medium, wherein when a robot is not positioned at a preset pile point, the positioning device is controlled to reposition; when the repositioning fails, the acquisition device is controlled to acquire a first image of the surrounding environment, and the first image is displayed; then, when a selection operation of selecting an identification area in a preset map based on the first image is detected, controlling the positioning device to reposition in the identification area; finally, when the repositioning is successful in the identification area, the repositioned position is determined as the current position of the robot.
According to the scheme, when the positioning fails due to the fact that the actual environment is changed, the map is not directly reconstructed, whether the robot is located at the preset stake point is judged first, when the robot is not located at the preset stake point, repositioning is conducted, if the repositioning fails, a user is prompted to select an identification area so as to reduce the detection range, and repositioning is conducted again in the identification area, so that the probability of successful positioning based on the existing map can be improved, the times of reconstructing the map is reduced, the frequency of reconstructing the map is reduced, and manpower and material resources are saved.
Of course, it is not necessary for any one product or method of practicing the application to achieve all of the advantages set forth above at the same time.
Drawings
In order to illustrate the embodiments of the application and the technical solutions of the prior art more clearly, the drawings used in their description are briefly introduced below; it will be obvious to a person skilled in the art that other drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a flowchart of a robot positioning method according to an embodiment of the present application;
FIG. 2 is a flowchart of a robot positioning method according to another embodiment of the present application;
FIG. 3 is a flowchart of a robot positioning method according to another embodiment of the present application;
FIG. 4 is a flowchart of a robot positioning method according to another embodiment of the present application;
fig. 5 is a schematic structural diagram of a positioning device for a robot according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the application; all other embodiments obtained by those skilled in the art based on these embodiments without inventive effort fall within the scope of the application.
At present, the map must be re-established whenever positioning fails because the actual environment has changed, so the map is rebuilt frequently and manpower and material resources are wasted.
The following will describe a robot positioning method according to an embodiment of the present application in detail with reference to a specific embodiment, as shown in fig. 1, and the specific steps are as follows:
s101, when the robot is not located at a preset pile point, controlling the positioning device to reposition.
In the embodiment of the application, preset stake points are the coordinate points of certain specific objects marked when the robot builds the map for the first time; for example, the position of the charging stake can be set as a stake point. When the robot starts, it first judges whether it is located at a preset stake point: while the robot is connected to a preset stake point for operations such as charging, the stake point sends a signal indicating the connection. If no such signal is received, it is determined that the robot is not located at a preset stake point, and the positioning device is then controlled to detect the surrounding environment and reposition.
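The startup check described above can be sketched as follows. This is an illustrative Python sketch rather than code from the patent; the function names and the `"stake_connected"` signal value are assumptions.

```python
# Hypothetical sketch of the startup logic: the robot is treated as being
# at the preset stake point only if a connection signal from the stake
# (e.g. the charging stake) has been received. All names are illustrative.

def is_at_stake_point(received_signals):
    """True if a connection signal from the preset stake point was received."""
    return "stake_connected" in received_signals

def on_startup(received_signals, reposition):
    """If no stake-point signal was received, control the positioning
    device to detect the surroundings and reposition (step S101)."""
    if is_at_stake_point(received_signals):
        return "at_stake_point"
    return reposition()

# No signal has been received, so repositioning is triggered.
status = on_startup([], reposition=lambda: "repositioning")
```

The `reposition` callback stands in for the positioning device; in a real system it would run the device's scan-matching routine.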
S102, when repositioning fails, controlling the acquisition device to acquire a first image of the surrounding environment, and displaying the first image.
In the embodiment of the application, the acquisition device may be, but is not limited to, a shooting device carried by the robot, such as a camera. If repositioning by the positioning device fails, the acquisition device is controlled to acquire a first image of the surrounding environment and display it to a user, typically an operation and maintenance person of the robot. The display manner may be, but is not limited to, sending the first image to a terminal display device in communication with the acquisition device, such as a mobile phone or a computer.
S103, controlling the positioning device to reposition in the identification area when detecting the selection operation of selecting the identification area in a preset map based on the first image.
In the embodiment of the application, the preset map is a three-dimensional map, such as a point cloud map, built according to the actual environment of the robot. After receiving the first image, the user can manually select an identification area in the map based on the first image, so as to narrow the detection range of the positioning device and raise the probability of successful positioning. If a selection operation of selecting an identification area in the preset map based on the first image is detected, the positioning device is controlled to reposition within the identification area selected by the user.
For example, suppose area A includes several sub-areas such as A1 and A2, and only sub-area A1 contains a new obstacle. When the robot moves into sub-area A2, the detection device recognizes the whole of area A, finds that it differs from what was marked when the map was built, and positioning fails. At this point the first image is sent to the user; after the user designates sub-area A2 as the identification area based on the first image, the positioning device detects only within A2. Because no new obstacle has been added in A2 and its marks are the same as when the map was built, the robot's position can be determined to be somewhere in A2, and positioning succeeds.
And S104, when the repositioning is successful in the identification area, determining the repositioned position as the current position of the robot.
In the embodiment of the application, when repositioning within the identification area succeeds, the repositioned position is determined as the current position of the robot, which facilitates subsequent operations.
In the embodiment of the application, when positioning fails because the actual environment has changed, the map is not rebuilt immediately. It is first judged whether the robot is located at the preset stake point; if not, repositioning is performed. If repositioning fails, the user is prompted to select an identification area to narrow the detection range, and repositioning is attempted again within that area. This raises the probability of successful positioning, reduces the number of times the map is rebuilt, lowers the frequency of map reconstruction, and saves manpower and material resources.
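Steps S101 to S104 can be summarized in a short control-flow sketch. This is a hypothetical illustration; the callbacks stand in for the positioning device, the acquisition device, and the user's area selection on the terminal, and each returns `None` on failure.

```python
def position_robot(at_stake_point, reposition, capture_image,
                   select_area, reposition_in_area):
    """Hypothetical sketch of the S101-S104 flow; each callback returns a
    position (or an area) on success and None on failure."""
    if at_stake_point:
        return "stake_point"
    pos = reposition()                # S101: reposition with the device
    if pos is not None:
        return pos
    image = capture_image()           # S102: first image shown to the user
    area = select_area(image)         # S103: user selects an identification area
    return reposition_in_area(area)   # S104: reposition within that area

# Global repositioning fails; repositioning within the selected area succeeds.
pos = position_robot(False, lambda: None, lambda: "img",
                     lambda img: "A2", lambda area: (3.0, 4.0))
```

Only when `reposition_in_area` also fails would the fallback of moving the robot to the stake point (S401 onward) apply.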
In yet another embodiment of the present application, the step S103 may include the steps of:
s201, acquiring first point cloud data of a first obstacle in the identification area detected by the positioning device at the current position.
In the embodiment of the present application, point cloud data refers to a set of vectors in a three-dimensional coordinate system, from which position information can be determined. The first obstacle refers to all items or terrain in the identification area that obstruct the robot's movement. Positioning can therefore be performed by acquiring the first point cloud data of the first obstacle detected by the positioning device in the identification area at the current position.
S202, determining whether a position corresponding to the first point cloud data exists in the preset map.
In the embodiment of the application, when a map is established, point cloud data corresponding to each position is recorded. Therefore, after the first point cloud data is measured, whether a position corresponding to the first point cloud data exists or not can be determined in a preset map, so that repositioning is performed.
And S203, if the position corresponding to the first point cloud data exists, determining that the repositioning is successful.
In the embodiment of the application, if a position corresponding to the first point cloud data exists, that position is the current position of the robot, so repositioning can be determined to be successful.
In the embodiment of the application, repositioning is performed using the first point cloud data of the first obstacle detected by the positioning device in the identification area at the current position; the user does not need to come to the site to assist, so the positioning process is simple and efficient.
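Steps S201 to S203 amount to looking up, in the preset map, a recorded position whose point cloud matches the detected one. A real system would use scan matching such as ICP; the brute-force comparison below is only a minimal sketch, and the data layout (a dict mapping map positions to their recorded clouds) and the tolerance are assumptions.

```python
import math

def find_matching_position(first_cloud, map_clouds, tol=0.05):
    """Return the map position whose recorded point cloud matches the
    detected first point cloud (equal sizes, and every detected point
    within tol of some recorded point), or None if no position matches."""
    def consistent(detected, recorded):
        if len(detected) != len(recorded):
            return False
        return all(min(math.dist(p, q) for q in recorded) <= tol
                   for p in detected)
    for position, recorded in map_clouds.items():
        if consistent(first_cloud, recorded):
            return position  # S203: repositioning succeeds at this position
    return None              # no corresponding position exists in the map

# One recorded cloud at map position (1.0, 2.0); the detected cloud matches it.
map_clouds = {(1.0, 2.0): [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]}
match = find_matching_position([(0.0, 0.0, 0.02), (1.0, 0.0, 0.0)], map_clouds)
```

A `None` result corresponds to the branch in S301, where candidate positions are derived from the cloud and the first image instead.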
In yet another embodiment of the present application, the method may further comprise the steps of:
and S301, if no position corresponding to the first point cloud data exists, determining at least one candidate position using the first point cloud data and the first image.
In the embodiment of the application, if no position corresponding to the first point cloud data exists, the scene of the identification area has changed, that is, the obstacles in the identification area have changed. In this case, the first point cloud data and the first image can be input into a preset underlying algorithm to determine at least one candidate position.
S302, displaying the at least one candidate position.
In the embodiment of the application, the at least one candidate position can be sent to a preset terminal or a preset server for display, where the candidate positions can be shown as a list so that the user can view them conveniently.
S303, if a selection operation of selecting a target position based on the at least one candidate position is detected, determining the target position as the current position of the robot.
In the embodiment of the application, after viewing the candidate positions, the user can manually select one of them as the target position. If a selection operation of selecting a target position from the at least one candidate position is detected, the target position is determined as the current position of the robot, completing the positioning.
In the embodiment of the application, when the robot cannot position itself, the user can complete the positioning by selecting, from the candidate positions displayed on the terminal, a target position to serve as the robot's current position, without coming to the site to assist, so the positioning process is simple and efficient.
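The candidate-selection flow of S301 to S303 can be sketched as below; `choose` stands in for the terminal UI where the user picks from the displayed list, and every name here is an assumption rather than the patent's implementation.

```python
def confirm_position(candidates, choose):
    """Display the candidate positions as a numbered list (S302) and let
    the user pick one as the target position (S303). `choose` receives
    the listing and returns the selected index, or None if no selection
    operation is detected."""
    listing = [f"{i}: {pos}" for i, pos in enumerate(candidates)]
    index = choose(listing)
    if index is None:
        return None              # no selection detected
    return candidates[index]     # target position = robot's current position

# The user picks the second candidate from the displayed list.
current = confirm_position([(1.0, 2.0), (3.5, 4.0)], choose=lambda items: 1)
```

In practice `choose` would block on the terminal or server until the operator responds.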
In yet another embodiment of the present application, the method may further comprise the steps of:
S401, if repositioning fails within the identification area, sending first prompt information, where the first prompt information is used to prompt a user to move the robot to a preset stake point.
In the embodiment of the application, if repositioning fails within the identification area, first prompt information can be sent to the terminal to prompt the user to move the robot to the preset stake point. The user can move the robot by remote control, or power down the robot's motors and then move it.
S402, after the robot has been moved to the preset stake point, acquiring a signal returned by the preset stake point and second point cloud data of a second obstacle detected by the positioning device at the current position.
In the embodiment of the application, after the robot has been moved to the preset stake point, the stake point can return a signal to the terminal. The signal returned by the preset stake point is acquired, together with the second point cloud data of the second obstacle detected by the positioning device at the current position.
S403, repositioning is carried out based on the signal and the second point cloud data.
In the embodiment of the application, after the signal returned by the preset stake point and the second point cloud data have been acquired, repositioning can be performed based on them.
In the embodiment of the application, when repositioning within the identification area fails, the map is not rebuilt immediately. Instead, first prompt information is sent to the terminal to prompt the user to move the robot to the preset stake point; after the robot has been moved there, the signal returned by the stake point and the second point cloud data of the second obstacle detected by the positioning device at the current position are acquired; finally, repositioning is performed based on the signal and the second point cloud data. This raises the probability of successful positioning, reduces the number of times the map is rebuilt, and saves manpower and material resources.
In yet another embodiment of the present application, the step S403 may include the steps of:
step one, judging whether the robot is connected with the preset pile point or not based on the signals.
In the embodiment of the application, whether the robot is connected to the preset stake point can be judged from the signal returned by the stake point; whether the robot is located at the preset stake point can in turn be determined from whether it is connected.
And step two, if the robot is connected to the preset stake point, judging whether the second point cloud data is consistent with the point cloud data corresponding to the position of the preset stake point.
In the embodiment of the application, if the robot is connected to the preset stake point, the robot is located at the stake point; it is then necessary to judge whether the second point cloud data is consistent with the point cloud data recorded for the stake point's position. This comparison determines whether the scene at the stake point's position has changed.
And step three, if the second point cloud data is consistent with the point cloud data corresponding to the position of the preset stake point, repositioning is successful.
In the embodiment of the application, if the second point cloud data is consistent with the point cloud data corresponding to the position of the preset stake point, the scene at that position is unchanged, and repositioning is determined to be successful.
In the embodiment of the application, whether the robot is connected to the preset stake point can be judged based on the signal; if it is connected, whether the second point cloud data is consistent with the point cloud data corresponding to the stake point's position is judged; if the two are consistent, repositioning is successful. This scheme raises the probability of successful positioning, reduces the number of times the map is rebuilt, and saves manpower and material resources.
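The three steps above combine the stake point's connection signal with a point cloud consistency check. A minimal sketch follows, with all names, the point-by-point comparison, and the tolerance assumed for illustration.

```python
import math

def reposition_at_stake_point(signal_connected, second_cloud, stake_cloud,
                              tol=0.05):
    """Step one: confirm the robot is connected to the preset stake point.
    Steps two and three: compare the second point cloud with the cloud
    recorded for the stake point's position; if consistent, repositioning
    succeeds, otherwise the scene has changed and the user is prompted."""
    if not signal_connected:
        return "not_at_stake_point"
    consistent = (len(second_cloud) == len(stake_cloud) and
                  all(math.dist(p, q) <= tol
                      for p, q in zip(second_cloud, stake_cloud)))
    return "success" if consistent else "scene_changed"

# Connected to the stake, and the detected cloud matches the recorded one.
result = reposition_at_stake_point(
    True, [(0.0, 0.0, 0.0)], [(0.0, 0.0, 0.01)])
```

A `"scene_changed"` result corresponds to the branches below, where the user is prompted either to re-create the map or to issue a forced identification instruction.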
In yet another embodiment of the present application, the method may further comprise the steps of:
If the second point cloud data is inconsistent with the point cloud data corresponding to the position of the preset pile point, second prompt information is sent, the second prompt information being used to prompt the user to recreate the map.
In the embodiment of the application, inconsistency between the second point cloud data and the point cloud data corresponding to the position of the preset pile point indicates that the scene at that position has changed, and such a change may affect the robot's subsequent work. In this case, second prompt information can be sent to the terminal to prompt the user to recreate the map, so that scene changes at the preset pile point do not disturb the robot's later operation.
In yet another embodiment of the present application, the method may further comprise the steps of:
Step one: if the second point cloud data is inconsistent with the point cloud data corresponding to the position of the preset pile point, send third prompt information, the third prompt information being used to prompt the user to issue a forced identification instruction.
In the embodiment of the application, such an inconsistency indicates that the scene at the position of the preset pile point has changed. Third prompt information is then sent to the terminal to prompt the user to issue a forced identification instruction, the forced identification instruction being used to indicate that the position of the preset pile point is to be determined as the position of the robot.
Step two: receive the forced identification instruction, and determine the position of the preset pile point as the position of the robot according to the instruction.
In the embodiment of the application, after the user issues the forced identification instruction, the instruction is received and the position of the preset pile point is determined as the position of the robot, completing the repositioning. This reduces the number of times the map must be rebuilt and saves manpower and material resources.
In the embodiment of the application, when positioning fails because the actual environment has changed, the map is not rebuilt immediately. Instead, it is first judged whether the robot is located at the preset pile point; if not, repositioning is attempted. If repositioning fails, the user is prompted to select an identification area so as to narrow the detection range, and repositioning is attempted again within that area. In this way, the number of successful positionings is increased, the number and frequency of map reconstructions are reduced, and manpower and material resources are saved.
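The overall flow summarized above can be sketched as a small decision function. The callables are hypothetical stand-ins for the robot's real interfaces and are not part of the patent:

```python
def relocate(relocate_global, ask_user_for_region, relocate_in_region):
    """Decision flow: try a global relocalization first; on failure, let
    the user narrow the search to an identification area and retry there.
    Returns an estimated pose, or None to tell the caller to fall back to
    the pile-point procedure. All parameter names are illustrative."""
    pose = relocate_global()
    if pose is not None:
        return pose                     # first relocalization succeeded
    region = ask_user_for_region()      # user selects the identification area
    return relocate_in_region(region)   # second attempt, restricted to region
```

Only when both attempts return `None` does the flow proceed to prompting the user to move the robot to the preset pile point.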
Based on the same technical concept, the embodiment of the application also provides a robot positioning device, as shown in fig. 5, which comprises:
the first control module 501 is configured to control the positioning device to reposition when the robot is not located at a preset pile point;
the second control module 502 is configured to control the acquisition device to acquire and display a first image of the surrounding environment when the repositioning fails;
a third control module 503, configured to control the positioning device to reposition in the identification area when detecting a selection operation of selecting the identification area in a preset map based on the first image;
a first determining module 504 is configured to determine the relocated position as the current position of the robot when the relocation within the identification area is successful.
In one possible embodiment, the third control module includes:
an acquisition unit, configured to acquire first point cloud data of a first obstacle in the identification area detected by the positioning device at a current position;
a first determining unit, configured to determine, in the preset map, whether a location corresponding to the first point cloud data exists;
and the second determining unit is used for determining that the repositioning is successful if the position corresponding to the first point cloud data exists.
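Restricting repositioning to the identification area can be sketched as a search over candidate positions inside that area only. The grid-map representation, function name, and translation-only matching below are illustrative assumptions; a real implementation would use proper scan matching (e.g., ICP) including rotation:

```python
def find_pose_in_region(scan, occupied, region):
    """Brute-force search over integer (x, y) translations inside `region`
    for a position at which every scan point lands on an occupied map
    cell. Purely illustrative of narrowing the search to the
    user-selected identification area; not the patent's algorithm."""
    (x0, y0), (x1, y1) = region
    for x in range(x0, x1 + 1):
        for y in range(y0, y1 + 1):
            if all((px + x, py + y) in occupied for px, py in scan):
                return (x, y)   # a position corresponding to the scan exists
    return None                 # relocalization in this region failed
```

Shrinking `region` shrinks the search space, which is why prompting the user to select an identification area raises the chance of an unambiguous match.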
In one possible embodiment, the apparatus further comprises:
a second determining module, configured to determine, if there is no location corresponding to the first point cloud data, at least one candidate location using the first point cloud data and the first image;
a display module for displaying the at least one candidate location;
and a third determining module, configured to determine, if a selection operation for selecting a target position based on the at least one candidate position is detected, the target position as a current position of the robot.
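How the candidate positions are derived from the first point cloud data is left open above. One hedged sketch scores each candidate translation by how many scan points land on occupied map cells and offers the best-scoring positions to the user; the first image could further filter these candidates, which this sketch omits. The function name and the hit-count heuristic are assumptions, not the patent's method:

```python
def candidate_positions(scan, occupied, search_space, top_k=3):
    """Rank candidate (x, y) translations by how many scan points land on
    occupied map cells and return up to top_k positions with at least
    one hit, for the user to choose from. Illustrative heuristic only."""
    scored = sorted(
        ((sum((px + x, py + y) in occupied for px, py in scan), (x, y))
         for x, y in search_space),
        key=lambda t: -t[0],
    )
    return [pos for hits, pos in scored[:top_k] if hits > 0]
```

The returned list corresponds to the "at least one candidate position" displayed to the user, from which a target position is selected as the robot's current position.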
In one possible embodiment, the apparatus further comprises:
the first sending module is used for sending first prompt information if repositioning fails in the identification area, wherein the first prompt information is used for prompting a user to move the robot to a preset pile point;
the acquisition module is used for acquiring a signal returned by the preset pile point and second point cloud data of a second obstacle detected by the positioning device at the current position after the robot is moved to the preset pile point;
and the repositioning module is used for repositioning based on the signal and the second point cloud data.
In one possible embodiment, the relocation module comprises:
the first judging unit is used for judging whether the robot is connected with the preset pile point or not based on the signals;
the second judging unit is used for judging whether the second point cloud data is consistent with the point cloud data corresponding to the position of the preset pile point if the robot is connected with the preset pile point;
and the third determining unit is used for successfully repositioning if the second point cloud data is consistent with the point cloud data corresponding to the position of the preset pile point.
In one possible embodiment, the apparatus further comprises:
the second sending module is configured to send second prompt information if the second point cloud data is inconsistent with the point cloud data corresponding to the position where the preset pile point is located, where the second prompt information is used to prompt the user to recreate the map.
In one possible embodiment, the apparatus further comprises:
the third sending module is used for sending third prompt information if the second point cloud data is inconsistent with the point cloud data corresponding to the position of the preset pile point, and the third prompt information is used for prompting the user to give a forced identification instruction;
and the receiving module is used for receiving the forced identification instruction and determining the position of the preset pile point as the position of the robot according to the indication of the forced identification instruction.
In the embodiment of the application, when positioning fails because the actual environment has changed, the map is not rebuilt immediately. Instead, it is first judged whether the robot is located at the preset pile point; if not, repositioning is attempted. If repositioning fails, the user is prompted to select an identification area so as to narrow the detection range, and repositioning is attempted again within that area. In this way, the number of successful positionings is increased, the number and frequency of map reconstructions are reduced, and manpower and material resources are saved.
Based on the same technical concept, the embodiment of the present application further provides an electronic device, as shown in fig. 6, including a processor 601, a communication interface 602, a memory 603, and a communication bus 604, where the processor 601, the communication interface 602, and the memory 603 communicate with one another through the communication bus 604,
a memory 603 for storing a computer program;
the processor 601 is configured to execute the program stored in the memory 603, and implement the following steps:
when the robot is not positioned at the preset pile point, the positioning device is controlled to reposition;
when the repositioning fails, controlling the acquisition device to acquire a first image of the surrounding environment and displaying the first image;
when detecting a selection operation of selecting an identification area in a preset map based on the first image, controlling the positioning device to reposition in the identification area;
when the repositioning is successful within the identification area, the repositioned position is determined as the current position of the robot.
The communication bus of the above electronic device may be a Peripheral Component Interconnect (PCI) bus or an Extended Industry Standard Architecture (EISA) bus, etc. The communication bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one bold line is shown in the figure, but this does not mean that there is only one bus or only one type of bus.
The communication interface is used for communication between the electronic device and other devices.
The memory may include a random access memory (Random Access Memory, RAM) or a non-volatile memory (Non-Volatile Memory, NVM), such as at least one magnetic disk memory. Optionally, the memory may also be at least one storage device located remotely from the aforementioned processor.
The processor may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU), a network processor (Network Processor, NP), and the like; it may also be a digital signal processor (Digital Signal Processing, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
In yet another embodiment of the present application, there is also provided a computer readable storage medium having stored therein a computer program which when executed by a processor implements the steps of any of the robot positioning methods described above.
In a further embodiment of the present application, a computer program product comprising instructions which, when run on a computer, causes the computer to perform the robot positioning method of any of the above embodiments is also provided.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be realized in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the flows or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example by wire (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium that a computer can access, or a data storage device, such as a server or data center, that integrates one or more available media. The available media may be magnetic media (e.g., floppy disk, hard disk, magnetic tape), optical media (e.g., DVD), or semiconductor media (e.g., Solid State Disk (SSD)), etc.
It should be noted that in this document, relational terms such as "first" and "second" are used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between those entities or actions. Moreover, the terms "comprises," "comprising," and any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The foregoing is only a specific embodiment of the application to enable those skilled in the art to understand or practice the application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A robot positioning method, comprising:
when the robot is not located at a preset pile point, controlling a positioning device to reposition;
when the repositioning fails, controlling an acquisition device to acquire a first image of the surrounding environment and displaying the first image;
when a selection operation of selecting an identification area in a preset map based on the first image is detected, controlling the positioning device to reposition within the identification area;
when the repositioning within the identification area succeeds, determining the repositioned position as the current position of the robot.
2. The method of claim 1, wherein controlling the positioning device to reposition within the identification area comprises:
acquiring first point cloud data of a first obstacle in the identification area detected by the positioning device at the current position;
determining whether a position corresponding to the first point cloud data exists in the preset map;
and if the position corresponding to the first point cloud data exists, determining that repositioning is successful.
3. The method according to claim 2, wherein the method further comprises:
if no position corresponding to the first point cloud data exists, determining at least one candidate position by using the first point cloud data and the first image;
displaying the at least one candidate location;
and if a selection operation of selecting a target position based on the at least one candidate position is detected, determining the target position as the current position of the robot.
4. The method according to claim 1, wherein the method further comprises:
if repositioning fails in the identification area, sending first prompt information, wherein the first prompt information is used for prompting a user to move the robot to a preset pile point;
after the robot is moved to a preset pile point, acquiring a signal returned by the preset pile point and second point cloud data of a second obstacle detected by the positioning device at the current position;
repositioning is performed based on the signal and the second point cloud data.
5. The method of claim 4, wherein the repositioning based on the signal and the second point cloud data comprises:
judging whether the robot is connected with the preset pile point or not based on the signal;
if the robot is connected with the preset pile point, judging whether the second point cloud data is consistent with the point cloud data corresponding to the position of the preset pile point;
and if the second point cloud data is consistent with the point cloud data corresponding to the position of the preset pile point, determining that repositioning is successful.
6. The method of claim 5, wherein the method further comprises:
and if the second point cloud data is inconsistent with the point cloud data corresponding to the position of the preset pile point, sending second prompt information, wherein the second prompt information is used for prompting the user to recreate the map.
7. The method of claim 5, wherein the method further comprises:
if the second point cloud data is inconsistent with the point cloud data corresponding to the position of the preset pile point, sending third prompt information, wherein the third prompt information is used for prompting the user to give a forced identification instruction;
and receiving the forced identification instruction, and determining the position of the preset pile point as the position of the robot according to the indication of the forced identification instruction.
8. A robotic positioning device, the device comprising:
the first control module is used for controlling the positioning device to reposition when the robot is not positioned at a preset pile point;
the second control module is used for controlling the acquisition device to acquire a first image of the surrounding environment and display the first image when the repositioning fails;
the third control module is used for controlling the positioning device to reposition in the identification area when detecting the selection operation of selecting the identification area in a preset map based on the first image;
and the determining module is used for determining the relocated position as the current position of the robot when the relocation in the identification area is successful.
9. An electronic device, characterized by comprising a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with one another through the communication bus;
a memory for storing a computer program;
a processor for carrying out the method steps of any one of claims 1-7 when executing a program stored on a memory.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored therein a computer program which, when executed by a processor, implements the method steps of any of claims 1-7.
CN202110396871.XA 2021-04-13 2021-04-13 Robot positioning method and device, electronic equipment and storage medium Active CN113095227B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110396871.XA CN113095227B (en) 2021-04-13 2021-04-13 Robot positioning method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110396871.XA CN113095227B (en) 2021-04-13 2021-04-13 Robot positioning method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113095227A CN113095227A (en) 2021-07-09
CN113095227B true CN113095227B (en) 2023-11-07

Family

ID=76677006

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110396871.XA Active CN113095227B (en) 2021-04-13 2021-04-13 Robot positioning method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113095227B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113478488B (en) * 2021-07-14 2023-07-07 上海擎朗智能科技有限公司 Robot repositioning method, apparatus, electronic device and storage medium
CN114012725B (en) * 2021-11-05 2023-08-08 深圳拓邦股份有限公司 Robot repositioning method, system, robot and storage medium
CN116965745A (en) * 2022-04-22 2023-10-31 追觅创新科技(苏州)有限公司 Coordinate repositioning method and system and cleaning robot

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20090088516A (en) * 2008-02-15 2009-08-20 한국과학기술연구원 Method for self-localization of a robot based on object recognition and environment information around the recognized object
CN106092104A (en) * 2016-08-26 2016-11-09 深圳微服机器人科技有限公司 The method for relocating of a kind of Indoor Robot and device
CN106323273A (en) * 2016-08-26 2017-01-11 深圳微服机器人科技有限公司 Robot relocation method and device
CN106813672A (en) * 2017-01-22 2017-06-09 深圳悉罗机器人有限公司 The air navigation aid and mobile robot of mobile robot
CN107037806A (en) * 2016-02-04 2017-08-11 科沃斯机器人股份有限公司 Self-movement robot re-positioning method and the self-movement robot using this method
CN109993794A (en) * 2019-03-29 2019-07-09 北京猎户星空科技有限公司 A kind of robot method for relocating, device, control equipment and storage medium
CN109986561A (en) * 2019-03-29 2019-07-09 北京猎户星空科技有限公司 A kind of robot long-distance control method, device and storage medium
CN111578946A (en) * 2020-05-27 2020-08-25 杭州蓝芯科技有限公司 Laser navigation AGV repositioning method and device based on deep learning

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100912874B1 (en) * 2007-06-28 2009-08-19 삼성전자주식회사 Method and apparatus for relocating a mobile robot
CN110307838B (en) * 2019-08-26 2019-12-10 深圳市优必选科技股份有限公司 Robot repositioning method and device, computer-readable storage medium and robot

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20090088516A (en) * 2008-02-15 2009-08-20 한국과학기술연구원 Method for self-localization of a robot based on object recognition and environment information around the recognized object
CN107037806A (en) * 2016-02-04 2017-08-11 科沃斯机器人股份有限公司 Self-movement robot re-positioning method and the self-movement robot using this method
CN106092104A (en) * 2016-08-26 2016-11-09 深圳微服机器人科技有限公司 The method for relocating of a kind of Indoor Robot and device
CN106323273A (en) * 2016-08-26 2017-01-11 深圳微服机器人科技有限公司 Robot relocation method and device
CN106813672A (en) * 2017-01-22 2017-06-09 深圳悉罗机器人有限公司 The air navigation aid and mobile robot of mobile robot
CN109993794A (en) * 2019-03-29 2019-07-09 北京猎户星空科技有限公司 A kind of robot method for relocating, device, control equipment and storage medium
CN109986561A (en) * 2019-03-29 2019-07-09 北京猎户星空科技有限公司 A kind of robot long-distance control method, device and storage medium
CN111578946A (en) * 2020-05-27 2020-08-25 杭州蓝芯科技有限公司 Laser navigation AGV repositioning method and device based on deep learning

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
A real-time indoor relocalization method for robots based on a point cloud map; Ma Yuelong; Cao Xuefeng; Chen Ding; Li Dengfeng; Jiang Bingchuan; Journal of System Simulation (Issue S1); 15-23 *

Also Published As

Publication number Publication date
CN113095227A (en) 2021-07-09

Similar Documents

Publication Publication Date Title
CN113095227B (en) Robot positioning method and device, electronic equipment and storage medium
US11422261B2 (en) Robot relocalization method and apparatus and robot using the same
JP6505939B1 (en) Method of identifying charging stand, device, robot, and computer readable storage medium
EP4039533B1 (en) Method and system for charging electric vehicle, and storage medium
CN110900602B (en) Positioning recovery method and device, robot and storage medium
CN108931246B (en) Method and device for detecting existence probability of obstacle at unknown position
CN108932515B (en) Method and device for correcting position of topological node based on closed loop detection
CN107295477A (en) A kind of localization method and mobile terminal
US20240069550A1 (en) Method for processing abnormality of material pushing robot, device, server, and storage medium
CN113741446B (en) Robot autonomous exploration method, terminal equipment and storage medium
CN103763731A (en) Positioning detection method and device
CN107786869B (en) Method, device and storage medium for generating menu path of television equipment
CN111260759B (en) Path determination method and device
WO2024007807A1 (en) Error correction method and apparatus, and mobile device
CN113440054B (en) Method and device for determining range of charging base of sweeping robot
CN114355903A (en) Robot automatic charging method and device, computer equipment and storage medium
CN109788431B (en) Bluetooth positioning method, device, equipment and system based on adjacent node group
CN111724621A (en) Vehicle searching system, method, computer readable storage medium and client
CN113190380A (en) Equipment relocation error recovery method and device, computer equipment and storage medium
CN103926610A (en) Equipment position information recording method and device
CN110971753A (en) Positioning method and device based on browser, mobile terminal and storage medium
CN110399439A (en) Point of interest labeling method, equipment and storage medium
TWI774140B (en) Two-way signal positioning method and two-way signal positioning system thereof
CN105739904A (en) Method for rapidly opening navigation application and intelligent wristband
CN115509525A (en) View construction method and device, electronic equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 601, 6 / F, building 2, No. 18, Kechuang 11th Street, Daxing District, Beijing, 100176

Applicant after: Jingdong Technology Information Technology Co.,Ltd.

Address before: 601, 6 / F, building 2, No. 18, Kechuang 11th Street, Daxing District, Beijing, 100176

Applicant before: Jingdong Shuke Haiyi Information Technology Co.,Ltd.

GR01 Patent grant