CN111795701B - Remote recovery positioning method and device for robot - Google Patents
- Publication number
- Publication number: CN111795701B (application CN202010657391.XA)
- Authority
- CN
- China
- Prior art keywords
- robot
- information
- positioning
- module
- recovery
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/20—Instruments for performing navigational calculations
- G01C21/206—Instruments for performing navigational calculations specially adapted for indoor navigation
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0212—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
- G05D1/0214—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory in accordance with safety or protection criteria, e.g. avoiding hazardous areas
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0212—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
- G05D1/0219—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory ensuring the processing of the whole working surface
Abstract
The invention discloses a remote recovery positioning method for a robot, which comprises the following steps: establishing a remote network communication connection with the robot; reading and analyzing configuration information of the robot; retrieving map data and floor data acquired by the robot to generate visual positioning information held by the robot system; combining the robot's sensor information with its map information to generate visual environment information for the robot's current surroundings; performing graph matching between the visual positioning information and the visual environment information, and, if the matching degree does not exceed a preset matching threshold, considering the robot to be in a lost positioning state; and, when the robot is in the lost positioning state, performing positioning recovery on the robot. The invention judges whether the robot is in a lost positioning state by comparing the visual positioning information with the visual environment information, and decides according to the result whether to perform positioning recovery. This remote recovery mode of operation improves recovery efficiency and reduces labor cost.
Description
[ technical field ]
The invention relates to the technical field of robots, and in particular to a remote recovery positioning method and device for a robot.
[ background of the invention ]
During actual operation in a station, a robot may lose its positioning, become stuck, or fail to reach its destination. In such cases, because the positioning system is in error or the robot has been forcibly displaced, the robot's positioning information no longer matches the pre-stored map information, or its positioning coordinates fall in an unreachable area of the map, so the robot cannot work normally and must be repositioned. The existing solution is to move the robot back to a normal working area manually, or to reset and reposition it on site, to restore its normal capability. Such methods of handling a lost or stuck robot waste time and labor and are inefficient.
In view of the above, it is desirable to provide a remote recovery positioning method and device for a robot.
[ summary of the invention ]
The invention aims to provide a remote recovery positioning method and device for a robot, so as to solve the time-consuming, labor-intensive and inefficient nature of existing approaches to a robot that has lost its positioning or become stuck, improving efficiency and saving cost through remote operation and recovery.
In order to achieve the above object, an aspect of the present invention provides a method for remotely restoring and positioning a robot, including the following steps:
establishing remote network communication connection with the robot;
reading and analyzing the configuration information of the robot; the configuration information comprises station configuration, point location configuration, floor configuration, map information, positioning information and sensor information;
calling map data and floor data acquired by a robot, and generating visual positioning information for the robot in the robot system according to the map information and the positioning information of the robot at present;
combining the sensor information and the map information of the robot to generate the current visual environment information of the robot;
carrying out graph matching on the visual positioning information and the visual environment information; if the matching degree does not exceed a preset matching threshold value, the robot is considered to be in a lost positioning state;
and when the robot is in the lost positioning state, positioning and recovering the robot.
In a preferred embodiment, when the robot is in the lost positioning state, the step of performing positioning recovery on the robot includes:
receiving real-time state information obtained by the robot through a sensor;
sending adjustment instructions for the speed, angle, distance and acceleration of the robot to the robot according to the real-time state information;
and controlling the robot to move according to the adjustment instructions until it leaves its current environment.
In a preferred embodiment, when the robot is in the lost positioning state, the step of performing positioning recovery on the robot includes:
sending the coordinates of a specific point location to the robot;
and controlling the robot to reposition according to the coordinates of the specific point location until the normal positioning state is recovered.
In a preferred embodiment, the step of generating the current visual environment information of the robot by combining the sensor information and the map information of the robot comprises:
calling task chain execution progress information, walking track information and sensor data of the robot;
determining the building and floor positions of the robot by combining the task chain execution progress information and the walking track information;
and determining the position of the robot on the floor by combining the sensor data and the map information to obtain the current visual environment information of the robot.
In a preferred embodiment, the step of retrieving task chain execution progress information, walking track information and sensor data of the robot further includes:
calling a navigation program record of the robot;
and analyzing the navigation program record to obtain the walking track information of the robot.
Another aspect of the present invention provides a remote recovery positioning device for a robot, including: the communication module is used for establishing remote network communication connection with the robot; the reading module is used for reading and analyzing the configuration information of the robot; the configuration information comprises station configuration, point location configuration, floor configuration, map information, positioning information and sensor information; the first generation module is used for calling map data and floor data acquired by the robot and generating visual positioning information for the robot system according to the map information and the positioning information of the robot at present; the second generation module is used for generating the current visual environment information of the robot by combining the sensor information of the robot and the map information; the matching module is used for carrying out graph matching on the visual positioning information and the visual environment information; if the matching degree does not exceed a preset matching threshold value, the robot is considered to be in a lost positioning state; and the positioning recovery module is used for positioning and recovering the robot when the robot is in the lost positioning state.
In a preferred embodiment, the positioning recovery module includes: a receiving module for receiving real-time state information obtained by the robot through its sensors; an instruction sending module for sending adjustment instructions for the speed, angle, distance and acceleration of the robot to the robot according to the real-time state information; and a movement adjusting module for controlling the robot to move according to the adjustment instructions until it leaves its current environment.
In a preferred embodiment, the positioning recovery module includes: a coordinate sending module for sending the coordinates of a specific point location to the robot; and a positioning resetting module for controlling the robot to reposition according to the coordinates of the specific point location until the normal positioning state is recovered.
In a preferred embodiment, the second generating module includes: the calling module is used for calling task chain execution progress information, walking track information and sensor data of the robot; the first determining module is used for determining the building and floor positions of the robot by combining the task chain execution progress information and the walking track information; and the second determination module is used for determining the position of the robot on the floor by combining the sensor data and the map information to obtain the current visual environment information of the robot.
In a preferred embodiment, the retrieving module further comprises: the navigation calling unit is used for calling the navigation program record of the robot; and the navigation analysis unit is used for analyzing the navigation program record to obtain the walking track information of the robot.
According to the invention, a remote communication connection is established with the robot and its configuration information is read directly; visual positioning information for the position where the robot believes it is located is obtained from the robot's positioning information and map information, and visual environment information for the robot's actual current position is obtained from its sensor information and map information; whether the robot is in a lost positioning state is judged by comparing the visual positioning information with the visual environment information, and whether to perform positioning recovery is decided according to the result. This remote recovery mode of operation improves recovery efficiency and reduces labor cost.
[ description of the drawings ]
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present invention and therefore should not be considered as limiting its scope; those skilled in the art can obtain other related drawings from them without inventive effort.
FIG. 1 is a flow chart of a method for remotely restoring a position of a robot according to the present invention;
FIG. 2 is a flowchart of the sub-steps of step S104 of the robot remote recovery positioning method shown in FIG. 1;
FIG. 3 is a flowchart of the sub-steps of step S106 of the robot remote recovery positioning method in one embodiment shown in FIG. 1;
FIG. 4 is a flowchart of the sub-steps of step S106 of the robot remote recovery positioning method in another embodiment shown in FIG. 1;
FIG. 5 is a block diagram of a robotic remote recovery positioning device provided in accordance with the present invention;
FIG. 6 is a block diagram of a position recovery module of the robotic remote recovery positioning device in one embodiment shown in FIG. 5;
FIG. 7 is a block diagram of a position recovery module of the robotic remote recovery positioning device of another embodiment shown in FIG. 5;
fig. 8 is a block diagram of a second generation module in the robotic remote recovery positioning device of fig. 5.
[ detailed description ]
In order to make the objects, technical solutions and advantageous technical effects of the present invention more clearly understood, the present invention is further described in detail below with reference to the accompanying drawings and the detailed description. It should be understood that the detailed description and specific examples, while indicating the preferred embodiment of the invention, are intended for purposes of illustration only and are not intended to limit the scope of the invention.
The invention provides a remote recovery positioning method for a robot, which is used for carrying out remote state monitoring on the robot which cannot normally work in site operation and carrying out positioning recovery so as to ensure that the robot can normally work.
As shown in FIG. 1, in an embodiment of the method, steps S101-S106 are included.
In step S101, a remote network communication connection with the robot is established.
In this step, the worker operates the robot remote recovery positioning device 100 (the device for short) to establish a remote connection with the robot, so that the worker can remotely read and retrieve the database stored in the robot and remotely control it. Specifically, the device establishes the remote connection with the robot using WebSocket (a protocol for full-duplex communication over a single TCP connection), which saves server resources and bandwidth and allows real-time communication.
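As an illustration of this device-robot link, the following sketch shows one possible JSON framing for device requests and for the status frames the robot pushes; the field names (`action`, `params`, `status`) and message shapes are assumptions for illustration, as the patent does not specify a wire format:

```python
import json

# Hypothetical JSON framing for the device <-> robot WebSocket link.
# The robot's real wire format is not specified in the patent.

def make_request(action, params=None):
    """Serialize a device-to-robot request, e.g. reading configuration."""
    return json.dumps({"action": action, "params": params or {}})

def parse_status(frame):
    """Decode a status frame pushed by the robot (battery, pose, ...)."""
    msg = json.loads(frame)
    if "status" not in msg:
        raise ValueError("not a status frame")
    return msg["status"]

# Example exchange (no real socket is opened here):
req = make_request("read_config", {"sections": ["station", "map", "sensors"]})
status = parse_status('{"status": {"battery": 87, "pose": [3.2, 1.5, 0.0]}}')
```

In a real deployment these frames would be exchanged over the WebSocket connection the step describes; here only the serialization is shown.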
In step S102, the configuration information of the robot is read and analyzed; the configuration information comprises station configuration, point location configuration, floor configuration, map information, positioning information and sensor information.
Specifically, after the remote connection with the robot is established, state data within the robot (such as battery level, network, position, and camera) is pushed in real time, the robot's configuration information is read, and the physical environment and configuration environment the robot is currently in are preliminarily obtained by analyzing that configuration information; for example, the robot's approximate orientation is determined from the laser profile, image data, and the like of its current environment.
In step S103, map data and floor data acquired by the robot are retrieved, and visual positioning information for the robot in the robot system is generated according to the map information and the positioning information where the robot is currently located.
In this step, the map information pre-stored in the robot's system, together with the building- and floor-related data in the station configuration and floor configuration, is retrieved. Visual positioning information with a floor-space outline is generated on the map from the coordinates of the last positioning point in the robot's positioning information; this is the position where the robot believes it is located. For the technology of visualizing a positioning location on a map, refer to the prior art; the invention is not limited herein.
In step S104, the current visual environment information of the robot is generated by combining the sensor information of the robot and the map information.
Since the device is in remote real-time communication with the robot, it can command the robot's sensors (such as a camera, laser range finder, or infrared sensor) to generate real-time image information. This image information and the map information are compared against each other to obtain the robot's true position in the map, and that position is then used as the current reference point to generate visual environment information with a spatial outline on the map. For example, laser pulses emitted by the robot's laser sensor reflect off obstacles (such as walls) and are received back by the sensor; the distance from the sensor to an obstacle is obtained from the time interval between emission and reception, and the returns form laser lines of different lengths when transmitted to the device. Combining each laser line's emission angle with its length yields laser contour lines, and matching these contour lines against the map information reveals the robot's position in the environment. For the method of obtaining a spatial profile with a laser, refer to existing laser mapping methods; the invention is not limited herein.
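The laser-outline computation described above can be sketched as follows; the per-beam record of emission angle plus round-trip time of flight, and the 2D sensor frame, are assumptions for illustration:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def tof_to_distance(t_round_trip):
    """Laser time of flight to range: the pulse travels out and back,
    so the one-way distance is half the round trip."""
    return C * t_round_trip / 2.0

def scan_to_points(angles_rad, times_s):
    """Turn per-beam (emission angle, round-trip time) pairs into 2D
    contour points in the sensor frame, forming the laser outline."""
    return [(tof_to_distance(t) * math.cos(a), tof_to_distance(t) * math.sin(a))
            for a, t in zip(angles_rad, times_s)]

# Two beams, both returning after 20 ns, i.e. obstacles about 3 m away:
pts = scan_to_points([0.0, math.pi / 2], [2.0e-8, 2.0e-8])
```

Matching the resulting contour points against the stored map (e.g. by scan matching) is what localizes the robot; that step is omitted here.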
Specifically, as shown in fig. 2, step S104 includes the following sub-steps S1041-S1043.
In step S1041, task chain execution progress information, walking trajectory information, and sensor data of the robot are retrieved. The robot's task execution progress is retrieved to check which step the task has reached; then the robot's walking track information is retrieved and the last position point in the track is checked. Meanwhile, the sensor data records stored by the robot are retrieved, and the current real-time sensing state of each sensor is checked remotely.
In step S1042, the building and floor position of the robot is determined by combining the task chain execution progress information and the walking trajectory information. For example, if the task at the time the task chain was interrupted was a disinfection operation on a certain floor of a certain building, it can essentially be determined that the robot is in that building and on that floor. The robot's walking track information then further narrows down its approximate position on that floor.
In step S1043, the position of the robot on the floor is determined by combining the sensor data and the map information, obtaining the robot's current visual environment information. Current sensing data is checked and obtained through the various sensors of the remotely controlled robot. For example, environment information about the robot's current surroundings is obtained through image acquisition sensors such as a camera and a laser sensor, and is then matched against the map information to obtain map-based visual environment information. Further, the position of the robot's current coordinates on the route map is checked; if the robot lies in an area of the map unreachable by any route (usually rendered as a gray-black area), the positioning can be considered lost.
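The unreachable-area test at the end of this step can be sketched against a simple occupancy grid; the cell size, the 0/1 encoding of reachable versus gray-black cells, and the coordinate convention are all assumptions:

```python
# Occupancy-grid sketch of the unreachable-area test: 0 = reachable,
# 1 = unreachable (the gray-black regions on the route map).

CELL = 0.05  # metres per grid cell (assumed)

def positioning_lost(grid, x, y):
    """True if the positioning coordinate falls outside the map or
    inside an unreachable cell."""
    row, col = int(y // CELL), int(x // CELL)
    if not (0 <= row < len(grid) and 0 <= col < len(grid[0])):
        return True
    return grid[row][col] == 1

grid = [[0, 0, 1],
        [0, 1, 1]]
```

A coordinate that resolves to a gray-black cell, or that leaves the map entirely, is treated the same way: positioning is considered lost.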
Further, in step S1041, the method further includes the steps of:
and calling a navigation program record of the robot. It can be understood that, when the robot is displaced abnormally, for example, forced displacement by the robot, the specific position or floor where the robot is located cannot be determined according to the camera data of the robot, so that the change of direction and speed generated in the displacement of the robot can be analyzed by reading the navigation program record stored in the robot, so as to simulate the historical walking track of the robot.
The navigation program record is analyzed to obtain the robot's walking track information. Specifically, the walking track information can be obtained by importing the end point of the simulated historical walking track into the map information. In this way, track queries for a robot under abnormal displacement are realized.
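Replaying direction and speed changes from such a navigation record into a historical walking track amounts to dead reckoning; the record format used below (heading, speed, duration) is an assumed one for illustration, as the patent does not define the stored format:

```python
import math

# Dead-reckoning replay of a navigation record. Each record is an
# assumed (heading_rad, speed_m_s, duration_s) triple.

def replay_track(start_xy, records):
    """Integrate direction and speed changes into a historical track."""
    x, y = start_xy
    track = [(x, y)]
    for heading, speed, dt in records:
        x += speed * math.cos(heading) * dt
        y += speed * math.sin(heading) * dt
        track.append((x, y))
    return track

# One metre east, then one metre north:
track = replay_track((0.0, 0.0), [(0.0, 0.5, 2.0), (math.pi / 2, 0.5, 2.0)])
```

The last point of the replayed track is the one imported into the map information in the step above.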
In step S105, performing graph matching on the visual positioning information and the visual environment information, and determining whether a matching value of the two exceeds a preset matching threshold; and if the matching degree does not exceed the preset matching threshold, the robot is considered to be in a lost positioning state.
In this step, the visual positioning information (i.e., where the robot system believes it is) and the visual environment information (i.e., where the robot really is at present), both built on the same map information, are graph-matched to obtain a matching value, and it is judged whether this value exceeds the preset matching threshold. If it does, the robot's positioning system can be considered normal; the robot's abnormal running state is then likely due to being stuck by terrain or other causes, and the robot can be freed by remotely operating it to move, or carried to a normal area manually. If the matching value does not exceed the preset threshold, the robot is considered unable to position itself normally, i.e., it is in a lost positioning state.
In this embodiment, features (such as points, lines, and planes) are extracted from each of the two images, described in parameters, and matched using those described parameters. A processed image typically contains features such as color features, texture features, shape features, and spatial-position features. Feature matching first preprocesses the images to extract their high-level features, then establishes correspondences between the features of the two images; commonly used feature elements include point features, edge features, and region features. Finally, a feature-similarity value for the pattern match is obtained, namely the matching value.
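As a minimal stand-in for the feature-based matching just described, the sketch below scores two binary map renderings by their Jaccard overlap and applies the lost-positioning test; the threshold value and the use of plain overlap instead of point/edge/region features are simplifying assumptions:

```python
# Jaccard overlap of two binary map renderings as a simplified stand-in
# for the feature-based matching value (0.0 = no agreement, 1.0 = identical).

def match_value(a, b):
    inter = sum(x and y for ra, rb in zip(a, b) for x, y in zip(ra, rb))
    union = sum(x or y for ra, rb in zip(a, b) for x, y in zip(ra, rb))
    return inter / union if union else 1.0

MATCH_THRESHOLD = 0.7  # assumed value; the patent leaves the threshold preset

def lost_positioning(visual_pos, visual_env):
    """Lost positioning when the match does not exceed the threshold."""
    return match_value(visual_pos, visual_env) <= MATCH_THRESHOLD

same = [[1, 1], [0, 0]]
moved = [[0, 0], [1, 1]]
```

A production system would extract and match point, edge, or region features as the text describes; the decision rule on the resulting matching value is the same either way.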
In step S106, when the robot is in the lost positioning state, positioning recovery is performed on the robot.
In this step, when it is determined that the cause of the abnormal operation of the robot is the lost positioning state, the robot is first attempted to be remotely restored, for example, by resetting the positioning system of the robot or by remote control.
Specifically, in one embodiment, when the positioning coordinates of the robot are in an inaccessible area in a map, the robot cannot move because the robot is uncertain of the current environment in which the robot is located, and at this time, the robot needs to be helped to leave the current environment. As shown in fig. 3, step S106 includes steps S1061, S1062, and S1063.
In step S1061, real-time status information obtained by the robot through the sensor is received. For example, a sensor of the robot, such as a camera, is turned on through remote operation, and a picture shot by the camera and the real-time state of other sensors are acquired.
In step S1062, adjustment instructions for the speed, angle, distance, and acceleration of the robot are sent to the robot based on the real-time status information. The background device receives an operation instruction from the user and transmits it remotely to the robot, and the robot moves according to the specified speed, angle, distance, and acceleration. For example, the background device determines through the robot's camera that the robot is currently in a narrow curve, analyzes the path the robot needs to take to leave its current environment, and then decomposes the motion posture required to travel that path into instructions for machine speed, angle, distance, acceleration, and so on.
In step S1063, the robot is controlled to move according to the adjustment instructions until it leaves the current environment. After receiving an adjustment instruction containing speed, angle, distance, and acceleration information, the robot moves accordingly until it leaves the current environment. It can be understood that the speed, angle, distance, acceleration, and other information in the instructions sent to the robot changes continually with the robot's current motion state, so the robot can keep adjusting its motion attitude through all kinds of narrow terrain until it runs into a normal working area, or until its positioning coordinates reach a reachable area of the map. In this way, moving the robot places its current positioning coordinates in a reachable area of the map, so the robot can clearly determine its environment and recover normal motion capability.
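A possible shape for these adjustment instructions, and their effect on a simple planar pose, is sketched below; the field semantics (absolute heading in radians, distance travelled along it) are assumptions for illustration, not the patent's definition:

```python
import math
from dataclasses import dataclass

# Hypothetical shape of an adjustment instruction. `angle` is assumed to
# be an absolute heading in radians and `distance` how far to travel
# along it; `speed` and `acceleration` are carried but not integrated
# in this simplified sketch.

@dataclass
class AdjustCmd:
    speed: float         # m/s
    angle: float         # heading in radians
    distance: float      # metres to travel along the heading
    acceleration: float  # m/s^2

def apply_cmds(x, y, cmds):
    """Replay a sequence of adjustment instructions on a 2D position."""
    for c in cmds:
        x += c.distance * math.cos(c.angle)
        y += c.distance * math.sin(c.angle)
    return x, y

# Back out of a dead end half a metre, then turn left and clear it:
pose = apply_cmds(0.0, 0.0, [AdjustCmd(0.2, math.pi, 0.5, 0.1),
                             AdjustCmd(0.3, math.pi / 2, 1.0, 0.1)])
```

In the method above the background device would emit such instructions continually, re-deriving each one from the robot's latest sensor state rather than replaying a fixed list.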
In another embodiment, when the robot cannot be located because there is no reference point or because the position is lost, the robot needs to be provided with a reference point for auxiliary location. As shown in fig. 4, step S106 includes steps S1064 and S1065.
In step S1064, the coordinates of a specific point location are sent to the robot. The coordinates lie in a reachable area of the map, which may contain a distinctive reference object.
In step S1065, the robot is controlled to reposition according to the coordinates of the specific point location until the normal positioning state is recovered. The robot can move to the position automatically according to the coordinates of the specific point location and then reposition itself in the positioning system; alternatively, it can be controlled by the background device to move to the point location. In this way, the robot is positioned with reference to the specific point coordinates provided to it, and its normal operating capability is recovered.
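The repositioning in steps S1064 and S1065 can be sketched as resetting the localization estimate once the robot physically stands on the sent coordinates; all names here are illustrative:

```python
# Resetting the localization estimate to a sent reference point once the
# robot physically stands on it. All names are illustrative.

def relocalize(estimated_pose, reference_point):
    """Return the new pose (the reference point itself) and the drift the
    old estimate had accumulated relative to it."""
    drift = (reference_point[0] - estimated_pose[0],
             reference_point[1] - estimated_pose[1])
    return reference_point, drift

# Old estimate at (5.0, 2.0); the robot is driven to the reference
# point (4.0, 2.5) and the positioning system is re-seeded there:
new_pose, drift = relocalize((5.0, 2.0), (4.0, 2.5))
```

The drift term is a by-product that shows how far positioning had wandered before recovery; the essential effect is the reset itself.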
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
In summary, the invention establishes a remote communication connection with the robot and directly reads its configuration information; it obtains visual positioning information for the position where the robot believes it is located from the robot's positioning information and map information, obtains visual environment information for the robot's actual current position from its sensor information and map information, judges whether the robot is in a lost positioning state by comparing the two, and decides according to the result whether to perform positioning recovery. This remote recovery mode of operation improves recovery efficiency and reduces labor cost.
Another aspect of the invention provides a robot remote recovery positioning device 100 for establishing a remote connection with a robot and operating it remotely. Its implementation and principle are consistent with the robot remote recovery positioning method above, so a detailed description is omitted here.
In the present embodiment, as shown in fig. 5, the robot remote recovery positioning device 100 includes:
the communication module 10 is used for establishing remote network communication connection with the robot;
a reading module 20 for reading and analyzing the configuration information of the robot; the configuration information comprises station configuration, point location configuration, floor configuration, map information, positioning information and sensor information;
the first generation module 30 is used for retrieving map data and floor data acquired by the robot, and generating visual positioning information for the robot in the robot system according to the current map information and positioning information of the robot;
the second generating module 40 is configured to combine the sensor information of the robot with the map information to generate current visual environment information of the robot;
the matching module 50 is used for performing graphic matching on the visual positioning information and the visual environment information; if the matching degree does not exceed a preset matching threshold, the robot is considered to be in a lost positioning state;
and a positioning recovery module 60, configured to perform positioning recovery on the robot when the robot is in a lost positioning state.
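The cooperation of modules 10 to 60 listed above can be sketched as a single device object. This is an illustrative composition only; the class, method names, and the 0.6 threshold are assumptions, not an API defined by the patent.

```python
class RemoteRecoveryDevice:
    """Toy wiring of the device modules 10-60 described above."""

    def __init__(self, robot, match_threshold=0.6):
        self.robot = robot
        self.match_threshold = match_threshold  # preset matching threshold

    def run_once(self):
        self.robot.connect()                         # communication module 10
        cfg = self.robot.read_config()               # reading module 20
        believed = cfg["positioning_view"]           # first generation module 30
        observed = cfg["environment_view"]           # second generation module 40
        degree = (len(believed & observed) /         # matching module 50
                  max(len(believed | observed), 1))
        if degree <= self.match_threshold:           # lost positioning state
            self.robot.recover()                     # positioning recovery module 60
            return "recovered"
        return "ok"
```

A fake robot whose sensor view matches its believed view passes straight through, while a mismatched view triggers the recovery module.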
Further, in one embodiment, as shown in FIG. 6, the positioning recovery module 60 includes:
the receiving module 61 is used for receiving the real-time state information obtained by the robot through the sensor;
the instruction sending module 62 is used for sending adjustment instructions for the robot's speed, angle, distance, and acceleration to the robot according to the real-time state information;
and the movement adjusting module 63 is used for controlling the robot to move according to the adjustment instructions until the robot moves clear of the current environment.
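The receive-state / send-adjustment loop of modules 61 to 63 can be sketched as follows. The robot interface (`read_state`, `apply`) and the backing-out command values are assumptions chosen for illustration; a real implementation would compute the adjustments from the sensor data.

```python
def recover_by_motion(robot, max_steps=50):
    """Loop of modules 61-63: read the robot's real-time state, send
    speed/angle/distance/acceleration adjustments, and repeat until the
    robot reports it has moved clear of the abnormal environment."""
    for _ in range(max_steps):
        state = robot.read_state()                 # receiving module 61
        if not state["stuck"]:
            return True                            # clear of the environment
        # instruction sending module 62: back away slowly (assumed values)
        command = {"speed": -0.2, "angle": 0.0,
                   "distance": 0.1, "acceleration": 0.05}
        robot.apply(command)                       # movement adjusting module 63
    return False                                   # give up after max_steps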
Further, in another embodiment, as shown in FIG. 7, the positioning recovery module 60 includes:
a coordinate sending module 64, configured to send the coordinates of a specific point location to the robot;
and the positioning resetting module 65 is used for controlling the robot to perform repositioning according to the coordinates of the specific point location until the normal positioning state is recovered.
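The coordinate-based recovery of modules 64 and 65 can be sketched in a few lines. The robot interface (`send_coordinates`, `relocate`, `positioning_state`) is an assumed, illustrative API.

```python
def recover_by_relocation(robot, point_coords):
    """Modules 64-65: push the coordinates of a known point to the robot,
    then re-initialise its localisation at that point and check the state."""
    robot.send_coordinates(point_coords)   # coordinate sending module 64
    robot.relocate(point_coords)           # positioning resetting module 65
    return robot.positioning_state() == "normal"
```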
Further, in one embodiment, as shown in FIG. 8, the second generating module 40 includes:
the invoking module 41 is configured to invoke task chain execution progress information, walking track information, and sensor data of the robot;
the first determining module 42 is used for determining the building and floor positions of the robot by combining the task chain execution progress information and the walking track information;
and the second determining module 43 is used for determining the position of the robot at the floor by combining the sensor data and the map information, and obtaining the current visual environment information of the robot.
Further, the retrieving module 41 includes: a navigation calling unit (not shown in the figure) for calling the navigation program record of the robot; and a navigation analysis unit (not shown in the figure) for analyzing the navigation program record to obtain the walking track information of the robot.
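The navigation calling and analysis units can be sketched as a tiny log replay: read the navigation program record and reconstruct the walking track from its movement entries. The `MOVE dx dy` record format is invented here for illustration; the patent does not specify the record's layout.

```python
def walking_track(nav_record):
    """Navigation analysis unit sketch: replay a navigation program record
    (one command per line) and return the sequence of visited grid cells."""
    x, y = 0, 0
    track = [(x, y)]
    for line in nav_record.splitlines():
        parts = line.split()
        # Only movement entries contribute to the walking track;
        # other record lines (e.g. sensor scans) are skipped.
        if len(parts) == 3 and parts[0] == "MOVE":
            x += int(parts[1])
            y += int(parts[2])
            track.append((x, y))
    return track
```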
The invention further provides a terminal comprising a memory, a processor, and a robot remote recovery positioning program stored in the memory and executable on the processor; when executed by the processor, the program implements the steps of the robot remote recovery positioning method described above.
Another aspect of the present invention is a readable storage medium storing a robot remote recovery positioning program which, when executed by a processor, implements the steps of the robot remote recovery positioning method described above.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and method steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The invention is not limited to what is described in the specification and embodiments. Additional advantages and modifications will readily occur to those skilled in the art; the invention is therefore not limited to the specific details, representative apparatus, and illustrative examples shown and described herein, and departures may be made from such details without departing from the spirit and scope of the general concept as defined by the appended claims and their equivalents.
Claims (8)
1. A robot remote recovery positioning method is characterized by comprising the following steps:
establishing remote network communication connection with the robot;
reading and analyzing the configuration information of the robot; the configuration information comprises station configuration, point location configuration, floor configuration, map information, positioning information and sensor information;
calling map data and floor data acquired by a robot, and generating visual positioning information for the robot in the robot system according to the current map information and positioning information of the robot;
calling task chain execution progress information, walking track information and sensor data of the robot; determining the building and floor positions of the robot by combining the task chain execution progress information and the walking track information; determining the position of the robot on the floor by combining the sensor data and the map information to obtain the current visual environment information of the robot;
carrying out graphic matching on the visual positioning information and the visual environment information; if the matching degree does not exceed a preset matching threshold value, the robot is considered to be in a lost positioning state;
and when the robot is in the lost positioning state, positioning and recovering the robot.
2. The method for remotely restoring a position of a robot according to claim 1, wherein when the robot is in the lost position state, the step of restoring the position of the robot includes:
receiving real-time state information obtained by the robot through a sensor;
sending adjustment instructions for the speed, the angle, the distance and the acceleration to the robot according to the real-time state information;
and controlling the robot to move according to the adjustment instructions until the robot moves clear of the current environment.
3. The method for remotely restoring a position of a robot according to claim 1, wherein when the robot is in the lost position state, the step of restoring the position of the robot includes:
sending the coordinates of a specific point location to the robot;
and controlling the robot to reposition according to the coordinates of the specific point location until the normal positioning state is recovered.
4. The method for remotely restoring and positioning a robot according to claim 1, wherein the step of retrieving the task chain execution progress information, the walking track information and the sensor data of the robot further comprises:
calling a navigation program record of the robot;
and analyzing the navigation program record to obtain the walking track information of the robot.
5. A robot remote recovery positioning device, comprising:
the communication module is used for establishing remote network communication connection with the robot;
the reading module is used for reading and analyzing the configuration information of the robot; the configuration information comprises station configuration, point location configuration, floor configuration, map information, positioning information and sensor information;
the first generation module is used for calling map data and floor data acquired by the robot, and generating visual positioning information for the robot in the robot system according to the current map information and positioning information of the robot;
the second generation module is used for generating the current visual environment information of the robot by combining the sensor information of the robot and the map information;
the matching module is used for carrying out graphic matching on the visual positioning information and the visual environment information; if the matching degree does not exceed a preset matching threshold value, the robot is considered to be in a lost positioning state;
the positioning recovery module is used for performing positioning recovery on the robot when the robot is in the lost positioning state;
the second generation module comprises:
the calling module is used for calling task chain execution progress information, walking track information and sensor data of the robot;
the first determining module is used for determining the building and floor positions of the robot by combining the task chain execution progress information and the walking track information;
and the second determining module is used for determining the position of the robot on the floor by combining the sensor data and the map information to obtain the current visual environment information of the robot.
6. The robotic remote recovery positioning device of claim 5, wherein the positioning recovery module comprises:
the receiving module is used for receiving the real-time state information obtained by the robot through the sensor;
the instruction sending module is used for sending adjustment instructions for the speed, the angle, the distance and the acceleration to the robot according to the real-time state information;
and the motion adjusting module is used for controlling the robot to move according to the adjustment instructions until the robot moves clear of the current environment.
7. The robotic remote recovery positioning device of claim 5, wherein the positioning recovery module comprises:
the coordinate sending module is used for sending the coordinates of a specific point location to the robot;
and the positioning resetting module is used for controlling the robot to perform repositioning according to the coordinates of the specific point location until the normal positioning state is recovered.
8. The robot remote recovery positioning device of claim 5, wherein the calling module further comprises:
the navigation calling unit is used for calling the navigation program record of the robot;
and the navigation analysis unit is used for analyzing the navigation program record to obtain the walking track information of the robot.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010657391.XA CN111795701B (en) | 2020-07-09 | 2020-07-09 | Remote recovery positioning method and device for robot |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111795701A CN111795701A (en) | 2020-10-20 |
CN111795701B true CN111795701B (en) | 2022-06-21 |
Family
ID=72811386
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010657391.XA Active CN111795701B (en) | 2020-07-09 | 2020-07-09 | Remote recovery positioning method and device for robot |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111795701B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112720491A (en) * | 2020-12-28 | 2021-04-30 | 上海有个机器人有限公司 | Robot remote control method, system, storage medium and equipment |
CN112797976A (en) * | 2021-01-18 | 2021-05-14 | 上海钛米机器人股份有限公司 | Positioning detection method and device, computer equipment and readable storage medium |
NO346982B1 (en) * | 2021-06-30 | 2023-03-27 | Autostore Tech As | System, method and computer program product of determining a position of a container handling vehicle in an automated grid based storage and retrieval system |
CN114655797A (en) * | 2021-12-30 | 2022-06-24 | 深圳十一空间机器人有限公司 | Robot elevator-taking floor calculation method based on IMU |
Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2007249735A (en) * | 2006-03-17 | 2007-09-27 | Fujitsu Ltd | Robot location controller and robot self-location restoration method |
RU2014116179A (en) * | 2014-04-22 | 2015-10-27 | Российская Федерация, от имени которой выступает Министерство Российской Федерации по делам гражданской обороны, чрезвычайным ситуациям и ликвидации последствий стихийных бедствий (МЧС России) | METHOD FOR AUTOMATIC CONTROL OF A GROUND ROBOT TECHNICAL COMPLEX |
CN105241461A (en) * | 2015-11-16 | 2016-01-13 | 曾彦平 | Map creating and positioning method of robot and robot system |
CN105873209A (en) * | 2015-12-15 | 2016-08-17 | 乐视移动智能信息技术(北京)有限公司 | Method and apparatus for positioning mobile terminal in lost locked state |
CN106092104A (en) * | 2016-08-26 | 2016-11-09 | 深圳微服机器人科技有限公司 | The method for relocating of a kind of Indoor Robot and device |
CN106843113A (en) * | 2017-03-16 | 2017-06-13 | 中智科创机器人有限公司 | Robot dispatching method, device and system |
CN108076075A (en) * | 2016-11-07 | 2018-05-25 | 重庆雾都科技有限公司 | A kind of intelligent terminal system being remotely located with network cloud management |
CN108151760A (en) * | 2017-12-28 | 2018-06-12 | 亿嘉和科技股份有限公司 | A kind of robot localization restoration methods based on odometer |
CN108398945A (en) * | 2018-01-17 | 2018-08-14 | 上海思岚科技有限公司 | A kind of method and apparatus executing task for mobile robot |
CN109394086A (en) * | 2018-11-19 | 2019-03-01 | 珠海市微半导体有限公司 | A kind of walk on method, apparatus and chip based on trapped clean robot |
CN110146078A (en) * | 2019-06-25 | 2019-08-20 | 张收英 | A kind of robot localization method and system |
CN110986967A (en) * | 2019-12-20 | 2020-04-10 | 上海有个机器人有限公司 | Robot automatic repositioning method, medium, terminal and device |
CN111044036A (en) * | 2019-12-12 | 2020-04-21 | 浙江大学 | Remote positioning method based on particle filtering |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109506641A (en) * | 2017-09-14 | 2019-03-22 | 深圳乐动机器人有限公司 | The pose loss detection and relocation system and robot of mobile robot |
CN109986561B (en) * | 2019-03-29 | 2021-04-20 | 北京猎户星空科技有限公司 | Robot remote control method, device and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||