CN111954188A - Robot control method, device, terminal and medium

Info

Publication number: CN111954188A
Application number: CN202010753106.4A
Authority: CN (China)
Inventors: 刘大志, 邓有志
Assignee (current and original): Uditech Co Ltd
Other versions: CN111954188B (granted publication)
Original language: Chinese (zh)
Events: application filed by Uditech Co Ltd; priority to CN202010753106.4A; publication of CN111954188A; application granted; publication of CN111954188B
Legal status: Granted; currently active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W 4/70 Services for machine-to-machine communication [M2M] or machine type communication [MTC]
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 11/00 Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 1/00 Beacons or beacon systems transmitting signals having a characteristic or characteristics capable of being detected by non-directional receivers and defining directions, positions, or position lines fixed relatively to the beacon transmitters; Receivers co-operating therewith
    • G01S 1/02 Beacons or beacon systems transmitting signals having a characteristic or characteristics capable of being detected by non-directional receivers and defining directions, positions, or position lines fixed relatively to the beacon transmitters; Receivers co-operating therewith using radio waves
    • G01S 1/08 Systems for determining direction or position line
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 11/00 Systems for determining distance or velocity not using reflection or reradiation
    • G01S 11/02 Systems for determining distance or velocity not using reflection or reradiation using radio waves
    • G01S 11/06 Systems for determining distance or velocity not using reflection or reradiation using radio waves using intensity measurements

Abstract

The application belongs to the technical field of robots and provides a robot control method, apparatus, terminal, and medium. The robot control method includes: determining the relative position relationship between a terminal device and a robot according to the signal strength of the communication signal between the terminal device and the robot; acquiring first information of a first area, where the first information includes an exit position of the first area and the terminal device is located outside the first area; and controlling the robot to leave the first area according to the first information and the relative position relationship. Embodiments of the application thereby realize rescue of the robot.

Description

Robot control method, device, terminal and medium
Technical Field
The present application relates to the field of robotics, and in particular, to a robot control method, apparatus, terminal, and medium.
Background
With the development of science and technology, more and more robots are used to serve humans automatically. While providing services, a robot may suffer component damage, malicious hijacking, or similar incidents, leaving it trapped in an area and unable to provide services normally.
A robot control method is therefore needed to help the robot leave the area in which it is trapped, that is, to rescue it.
Disclosure of Invention
Embodiments of the application provide a robot control method, apparatus, terminal, and medium, which can control a robot to leave an area in which it is trapped.
In a first aspect, an embodiment of the present application provides a robot control method, where the method includes:
determining the relative position relationship between a terminal device and the robot according to the signal strength of the communication signal between the terminal device and the robot;
acquiring first information of a first area, where the first information includes an exit position of the first area and the terminal device is located outside the first area; and
controlling the robot to leave the first area according to the first information and the relative position relationship.
In a possible implementation manner of the first aspect, the first information includes a target position of a light-transmitting structure of the first area, and the controlling the robot to leave the first area according to the first information and the relative position relationship includes: capturing, according to the target position, a third image containing the robot through the light-transmitting structure; updating the relative position relationship according to the third image; and controlling the robot to leave the first area according to the first information and the updated relative position relationship.
In a possible implementation manner of the first aspect, the capturing, according to the target position, a third image containing the robot through the light-transmitting structure includes: judging, according to the real-time position of the terminal device and the target position, whether the terminal device can shoot the robot through the light-transmitting structure; if the robot can be shot through the light-transmitting structure, shooting the third image through the light-transmitting structure; and if the robot cannot be shot through the light-transmitting structure, adjusting the real-time position of the terminal device so that the terminal device can shoot the robot through the light-transmitting structure at the target position.
In a possible implementation manner of the first aspect, the first information further includes an identifier of the first area, and the controlling the robot to leave the first area according to the first information and the relative position relationship includes: acquiring an area structure diagram corresponding to the first area according to the identifier; and controlling the robot to leave the first area according to the area structure diagram, the exit information, and the relative position relationship.
In a possible implementation manner of the first aspect, the controlling the robot to leave the first area according to the first information and the relative position relationship includes: acquiring a second image collected by the robot in real time; determining a first position of the robot in the first area according to the first information, the second image, and the relative position relationship; and controlling the robot to leave the first area according to the first position and the first information.
In a possible implementation manner of the first aspect, the controlling the robot to leave the first area according to the first position and the first information includes: generating a first path according to the first position and the first information; controlling the robot to move along the first path; and, during the movement of the robot, if a preset updating condition is met, returning to the step of generating a first path according to the first position and the first information, so that the robot leaves the first area.
In a possible implementation manner of the first aspect, the controlling the robot to leave the first area according to the first information and the relative position relationship further includes: controlling the robot to move in the exit direction according to the first information; and, during the movement of the robot, adjusting the movement direction of the robot according to the relative position relationship so that the robot leaves the first area.
In a possible implementation manner of the first aspect, the adjusting the movement direction of the robot according to the relative position relationship so that the robot leaves the first area includes: acquiring motion information of the robot; and adjusting the movement direction of the robot according to the motion information and the relative position relationship so that the robot leaves the first area.
In a possible implementation manner of the first aspect, the controlling the robot to leave the first area according to the first information and the relative position relationship includes: acquiring historical motion data of the robot in the first area; calculating a theoretical position of the robot according to the historical motion data and the first information; controlling the robot to move to the theoretical position according to the theoretical position and the relative position relationship; and controlling the robot to leave the first area according to the theoretical position and the historical motion data.
A second aspect of embodiments of the present application provides a robot control apparatus, including:
the determining unit is configured to determine the relative position relationship between the terminal device and the robot according to the signal strength of the communication signal between the terminal device and the robot;
the acquisition unit is configured to acquire first information of a first area, where the first information includes an exit position of the first area and the terminal device is located outside the first area; and
the control unit is configured to control the robot to leave the first area according to the first information and the relative position relationship.
A third aspect of the embodiments of the present application provides a terminal, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the above method when executing the computer program.
A fourth aspect of the embodiments of the present application provides a computer-readable storage medium storing a computer program that, when executed by a processor, implements the steps of the above method.
A fifth aspect of the embodiments of the present application provides a computer program product which, when run on a terminal device, causes the terminal device to perform the steps of the above method.
In the embodiments of the application, the relative position relationship between the terminal device and the robot is determined according to the signal strength of the communication signal between the robot and a terminal device located outside the first area. First information of the first area in which the robot is located is then acquired, the first information including an exit position. Finally, the robot is controlled to leave the first area according to the first information and the relative position relationship. In this way, when the robot is trapped in the first area, the terminal device located outside the first area can establish communication with the robot, identify the exit of the first area, and control the robot to leave through that exit, thereby rescuing the robot.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed for the embodiments or the prior-art descriptions are briefly introduced below. The drawings in the following description are only some embodiments of the present application; those skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a schematic flow chart of a first implementation of a robot control method provided in an embodiment of the present application;
fig. 2 is a schematic flow chart of a second implementation of a robot control method according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of acquiring a third image according to an embodiment of the present disclosure;
fig. 4 is a schematic flow chart of a third implementation of a robot control method according to an embodiment of the present application;
fig. 5 is a schematic flow chart of a fourth implementation of a robot control method according to an embodiment of the present application;
FIG. 6 is a schematic diagram of determining a first position of a robot according to an embodiment of the present disclosure;
fig. 7 is a schematic flowchart of a fifth implementation of a robot control method according to an embodiment of the present application;
fig. 8 is a schematic flow chart of a sixth implementation of a robot control method according to an embodiment of the present disclosure;
fig. 9 is a schematic structural diagram of a robot control device according to an embodiment of the present disclosure;
fig. 10 is a schematic structural diagram of a terminal according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions, and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here merely illustrate the present application and are not intended to limit it. All other embodiments that a person skilled in the art can derive from the embodiments of the present application without creative effort fall within the protection scope of the present application.
With the development of science and technology, more and more robots are used to serve humans automatically. While providing services, a robot may suffer component damage, malicious hijacking, or similar incidents, leaving it trapped in an area and unable to provide services normally.
In order to explain the technical means of the present application, the following description will be given by way of specific examples.
Fig. 1 shows a schematic implementation flow of a robot control method provided by an embodiment of the present application. The method can be applied to a terminal device and suits situations in which a trapped robot needs to be rescued. The terminal device may be an unmanned aerial vehicle, a rescue robot, or the like, and is used to rescue the robot to be rescued.
Specifically, the robot control method may include the following steps 101 to 103.
Step 101, determining the relative position relationship between the terminal device and the robot according to the signal strength of the communication signal between the terminal device and the robot.
The relative position relationship may include a relative distance between the terminal device and the robot, and may also include a direction of the robot relative to the terminal device.
It should be noted that, before the signal strength is obtained, the terminal device needs to establish communication with the robot to be rescued. In practical applications, the robot may lose contact with a remote dispatching terminal because of poor signal inside the first area in which it is trapped, which is why it needs to be rescued. Therefore, in order to determine the positional relationship between the terminal device and the robot from the signal strength, in some embodiments of the present application communication with the robot to be rescued may be established through short-range communication, avoiding the influence of environmental factors on the communication between the terminal device and the robot; after the communication is established, the relative position relationship between them is determined from the signal strength. For example, communication with the robot to be rescued may be established using technologies such as the HiLink protocol, Wi-Fi (IEEE 802.11), Mesh, Bluetooth, the ZigBee/802.15.4 protocol, the Thread/802.15.4 protocol, Z-Wave, NFC, UWB, or LiFi.
In some embodiments of the present application, before rescuing the robot, and especially when the robot and the terminal device establish their connection by short-range communication, the terminal device needs to move to the vicinity of the first area.
Specifically, in some embodiments of the present application, if the robot is still capable of positioning itself, the terminal device may acquire the robot's position from before it became trapped and move there. In other embodiments, historical motion data of the robot may be acquired, the robot's position estimated from those data, and the terminal device moved to the estimated position. Alternatively, in other embodiments, the terminal device may establish communication with the robot through a long-range communication method such as satellite communication, determine the robot's approximate position, and move there. In these ways, the terminal can move to the vicinity of the first area, establish short-range communication with the robot, and control the robot to leave the first area according to the signal strength of the communication signal.
In the embodiments of the application, the stronger the signal strength of the communication signal between the terminal device and the robot to be rescued, the shorter the distance between them; the weaker the signal strength, the greater the distance. The relative position relationship between the terminal device and the robot can therefore be determined from the signal strength.
Specifically, in some embodiments of the present application, experiments may be performed in advance to calibrate a mapping between signal strength and relative position; in practical applications, the relative position relationship between the terminal device and the robot to be rescued is then determined from the signal strength of the communication signal and this mapping.
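As a concrete illustration of such a mapping, the sketch below inverts a log-distance path-loss model. This model choice is an assumption rather than something the patent prescribes, and the reference RSSI at one metre and the path-loss exponent are placeholder values that the calibration experiments described above would supply.

```python
# Minimal sketch (not from the patent): mapping signal strength to distance
# by inverting the log-distance path-loss model RSSI(d) = A - 10 * n * log10(d).
# rssi_at_1m (A) and path_loss_exponent (n) are assumed calibration values.

def rssi_to_distance(rssi_dbm: float,
                     rssi_at_1m: float = -45.0,
                     path_loss_exponent: float = 2.5) -> float:
    """Estimate the terminal-robot distance in metres from a measured RSSI."""
    return 10 ** ((rssi_at_1m - rssi_dbm) / (10 * path_loss_exponent))
```

With these assumed constants, an RSSI of -45 dBm maps to 1 m and -70 dBm to roughly 10 m; in practice the constants come out of the pre-run calibration.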
In other embodiments of the application, the terminal device may be moved, or the robot to be rescued may be controlled to move, while the displacement data are recorded; the relative position relationship between the terminal device and the robot is then calculated from the displacement data and the signal strength.
If the relative position relationship is calculated by controlling the robot to be rescued to move, the displacement data can be computed by the robot through its own positioning module. In practical applications, however, the robot may be trapped precisely because its positioning module is damaged. In that case, the displacement data may be recorded by an odometer built into the robot, or calculated from distances measured by a laser sensor, radar, or the like carried by the robot.
If the relative position relationship is calculated by moving the terminal device, the same methods the robot uses to determine displacement data may be used. In addition, when the terminal device can ascend and descend vertically, as an unmanned aerial vehicle can, a barometer fitted on the terminal device can be used to calculate the displacement in the vertical direction.
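One way the recorded displacement data and the signal strength can be combined is trilateration: the terminal logs its own positions from the displacement data, estimates the terminal-robot distance at each position, and solves for the robot's position by least squares. The sketch below is an illustration under assumed 2-D coordinates, not the patent's prescribed computation.

```python
import numpy as np

def trilaterate(anchor_positions: np.ndarray, distances: np.ndarray) -> np.ndarray:
    """Least-squares robot position from three or more terminal positions
    (built from the displacement log) and the distance estimated at each.

    anchor_positions: (k, 2) array of terminal positions, k >= 3.
    distances:        (k,)   array of estimated terminal-robot distances.
    """
    p0, d0 = anchor_positions[0], distances[0]
    # Subtracting |x - p0|^2 = d0^2 from each |x - pi|^2 = di^2 cancels the
    # quadratic term and leaves the linear system A x = b.
    A = 2.0 * (anchor_positions[1:] - p0)
    b = (d0 ** 2 - distances[1:] ** 2
         + np.sum(anchor_positions[1:] ** 2, axis=1) - np.sum(p0 ** 2))
    position, *_ = np.linalg.lstsq(A, b, rcond=None)
    return position
```

For example, distances of 5.0, 8.06, and 6.71 m measured from (0, 0), (10, 0), and (0, 10) place the robot near (3, 4).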
Step 102, first information of the first area is obtained, wherein the first information includes an exit position of the first area.
The first area is the area in which the robot is located. Since there is presumably some reason the robot became trapped there, for safety the terminal device stays outside the first area. That is, the robot must be rescued, and made to leave the first area, by a terminal device located outside it.
Therefore, in the embodiment of the present application, the robot to be rescued may be controlled to leave the first area by acquiring the first information of the first area and using the exit position of the first area included in the first information.
In some embodiments of the present application, the first information may include a window position, a stair position, a water pipe position, a balcony position, and the like, in addition to the exit position. By acquiring these specific positions, a basis for feature recognition may be provided for subsequently guiding the robot away from the first area.
And 103, controlling the robot to leave the first area according to the first information and the relative position relation.
In the embodiment of the present application, after the first information and the relative positional relationship are acquired, the robot may be controlled to leave the first area through the exit position in the first information.
In the embodiments of the application, the relative position relationship between the terminal device and the robot is determined according to the signal strength of the communication signal between the robot and a terminal device located outside the first area. First information of the first area in which the robot is located is then acquired, the first information including an exit position. Finally, the robot is controlled to leave the first area according to the first information and the relative position relationship. In this way, when the robot is trapped in the first area, the terminal device located outside the first area can establish communication with the robot, identify the exit of the first area, and control the robot to leave through that exit, thereby rescuing the robot.
In the foregoing step 102, the manner of acquiring the first information of the first area may be selected according to actual situations. For example, in some embodiments of the present application, a scheduling system or a monitoring system may exist in the first area, and at this time, the terminal device may receive first information sent by the scheduling system or the monitoring system in the first area.
In other embodiments of the present application, a first image of the first area may be acquired, and image recognition performed on the first image to obtain the first information.
Since the terminal device is located outside the first area, the first image of the first area it acquires is an image of the area's exterior. Taking a building as the first area, the first image acquired by the terminal device is an image of the building's appearance. To make the obtained relative position more accurate, image recognition may be performed on the first image to identify the target position of a light-transmitting structure of the first area. That is, the first information may further include the target position of the light-transmitting structure of the first area.
The light-transmitting structure is any part of the first area that transmits light, for example a window, a French window, a door, a transparent wall, a transparently covered roof, or an uncovered roof. Through the light-transmitting structure, the terminal device can observe the robot and adjust the relative position relationship accordingly.
Specifically, as shown in fig. 2, in some embodiments of the present application, the controlling the robot to leave the first area according to the first information and the relative position relationship may specifically include the following steps 201 to 203.
Step 201, capturing a third image containing the robot through the light-transmitting structure at the target position.
In practical applications, the terminal device may not manage to acquire the third image on the first attempt. Taking fig. 3 as an example, when the terminal device is the drone 301, the robot 302 may be just inside the blind area of the drone's field of view.
Therefore, in some embodiments of the present application, whether the terminal device can shoot the robot through the light-transmitting structure is judged according to the real-time position of the terminal device and the target position. If the robot can be shot through the light-transmitting structure, the third image is shot through it; if not, the real-time position of the terminal device is adjusted until the terminal device can shoot the robot through the light-transmitting structure.
Taking fig. 3 as an example, when the drone 301 cannot shoot the robot 302 through the target position of the window 303, the drone 301 may ascend to adjust its real-time position and then shoot the robot 302 through the target position of the window 303.
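Geometrically, the judgment in this step amounts to testing whether the straight segment from the camera to the robot passes through the window opening. The sketch below assumes known world coordinates for the camera, the robot, and the window corners, with an axis-aligned window; all of these names and the coordinate setup are illustrative assumptions.

```python
import numpy as np

def robot_visible_through_window(camera: np.ndarray,
                                 robot: np.ndarray,
                                 window_corners: np.ndarray) -> bool:
    """Return True if the camera-to-robot segment crosses the window.

    window_corners: (4, 3) array of corner coordinates; the window is
    assumed to be an axis-aligned rectangle for this sketch.
    """
    p0, p1, p2 = window_corners[:3]
    normal = np.cross(p1 - p0, p2 - p0)       # normal of the window plane
    denom = normal @ (robot - camera)
    if abs(denom) < 1e-9:                     # segment parallel to the plane
        return False
    t = normal @ (p0 - camera) / denom        # intersection parameter along segment
    if not 0.0 <= t <= 1.0:                   # plane not between camera and robot
        return False
    hit = camera + t * (robot - camera)       # point where the segment meets the plane
    lo = window_corners.min(axis=0) - 1e-9
    hi = window_corners.max(axis=0) + 1e-9
    return bool(np.all(hit >= lo) and np.all(hit <= hi))
```

When the test returns False, the terminal device adjusts its real-time position, as the drone does above, and tests again.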
Step 202, updating the relative position relationship according to the third image.
In practical applications, signal reflection and similar effects may exist in the first area where the robot is located, so the relative position relationship determined from the signal strength of the communication signal between the terminal device and the robot may contain an error. A more accurate relative position relationship can then be determined from the third image.
Specifically, using the acquired third image, the terminal device can calculate the actual distance between itself and the robot from camera parameters calibrated in advance. The error amount caused by the environment can then be determined from the actual distance and the current relative position relationship, and the relative position relationship adjusted by that error amount.
For example, if the relative distance between the terminal device and the robot determined from the signal strength is 5 meters while the actual distance obtained from the third image is 15 meters, the error amount caused by the environment is 10 meters, and the relative position relationship can be adjusted accordingly.
When the robot is later controlled to leave the first area using the relative position relationship, the terminal device may no longer be able to continuously capture images containing the robot, so the relative position relationship can be adjusted directly by the error amount to keep it accurate.
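A minimal sketch of this correction, assuming a pinhole camera model with a pre-calibrated focal length and a known robot height; the parameter values are made up to mirror the 5-metre/15-metre example above.

```python
def camera_distance(focal_length_px: float,
                    robot_height_m: float,
                    robot_pixel_height: float) -> float:
    """Pinhole model: an object of height H at distance Z spans f * H / Z pixels."""
    return focal_length_px * robot_height_m / robot_pixel_height

# Mirroring the example above: the RSSI-based estimate says 5 m, the third
# image says 15 m, so the environment-induced error amount is 10 m. That
# offset is then applied to later RSSI-only estimates once the robot is out
# of view. All numbers here are assumed for illustration.
rssi_estimate_m = 5.0
image_estimate_m = camera_distance(focal_length_px=900.0,
                                   robot_height_m=1.0,
                                   robot_pixel_height=60.0)   # -> 15.0
error_amount_m = image_estimate_m - rssi_estimate_m           # -> 10.0
corrected_estimate_m = rssi_estimate_m + error_amount_m       # -> 15.0
```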
Step 203, controlling the robot to leave the first area according to the first information and the updated relative position relationship.
In the embodiments of the present application, the target position of the light-transmitting structure is recognized, and a third image containing the robot is captured through it. The third image can then be used to update the relative position relationship, improving its accuracy and, in turn, the efficiency of controlling the robot to leave the first area.
In other embodiments of the present application, the first area in which the robot is located may have an identifier that uniquely identifies it, such as a house number, a distinctive building shape, or a plaque bearing the building's name.
At this time, the first information acquired by the terminal device may further include an identifier of the first area. As shown in fig. 4, the controlling the robot to leave the first area may further include the following steps 401 to 402.
Step 401, obtaining an area structure diagram corresponding to the first area according to the identifier.
The area structure diagram carries information about paths, obstacles, exits, and the like in the first area; it may be, for example, an electronic map, a floor plan, or a spatial structure model of the area. Since the identifier uniquely identifies the first area, the area structure diagram acquired according to the identifier is guaranteed to be that of the first area.
Specifically, the area structure diagram may be acquired from an administrative body such as a government department, the construction company, or the campus management.
Step 402, controlling the robot to leave the first area according to the area structure diagram, the exit information, and the relative position relationship.
In some embodiments of the application, after the area structure diagram is obtained, a path from the current position of the robot to the exit may be determined according to the exit information and the relative position relationship.
In the embodiments of the present application, the area structure diagram corresponding to the first area is acquired from the identifier, and the robot is then controlled to leave the first area according to the area structure diagram, the exit information, and the relative position relationship. Because the area structure diagram contains the specific structural information of the first area, including information missing from the first information, such as obstacles like potted plants and the internal details of corridors, controlling the robot to leave the first area using the area structure diagram is more accurate.
In an embodiment of the application, different implementations may be adopted for controlling the robot to leave the first area according to the first information and the relative position relationship, and the implementation may be specifically selected according to actual situations.
In some embodiments of the present application, if the camera of the robot can be used normally, the robot may be further controlled to leave the first area by using the image collected by the robot.
Specifically, as shown in fig. 5, the controlling the robot to leave the first area using the image captured by the robot may include the following steps 501 to 503.
Step 501, acquiring a second image collected by the robot in real time.
Specifically, the terminal device may obtain the second image acquired by the robot through a pre-established communication relationship.
The content of the second image differs from the first information, but some features in it correspond to the first information. Taking a robot trapped in a building as an example, the first image acquired by the unmanned aerial vehicle is an image of the building's appearance, from which first information such as balcony positions and window positions can be identified. The second image is an image of the building's interior; although the two images differ in content, the second image may contain balconies, windows, pipelines, and other features corresponding to the first information.
Step 502, determining a first position of the robot in the area according to the first information, the second image and the relative position relationship.
In some embodiments of the application, after the second image is acquired, a first position of the robot in the area may be determined according to the first information, the second image and the relative position relationship.
Specifically, the second image may be subjected to image recognition, and compared with the first information, and then the first position of the robot in the area is determined according to the comparison result and the relative position relationship.
Taking fig. 6 as an example, the robot is trapped in a building, and the first information may further include the positions of target objects such as windows, pillars, and balconies. With the terminal device being the unmanned aerial vehicle 603, it can be determined from the signal strength of the communication signal between the drone 603 and the robot 604 that the robot 604 lies on the arc 601 shown. The building, however, typically has multiple floors, so the robot's specific position cannot yet be determined. As shown in fig. 6, the drone 603 can capture a first image containing the window 602 and identify the window 602; the second image captured by the robot 604 also contains the window 602, which is likewise identified from the second image. By comparison, the floor on which the robot 604 is located must be the one containing the window 602, and the first position of the robot 604 in the area is thereby determined.
Further, when the first area has multiple floors, the first information may include elevator structure information, the number of windows, and the like, from which the number of floors of the first area can be calculated; the specific floor on which the robot is located can then be determined from the first information, the second image, and the relative position relationship.
Step 503, controlling the robot to leave the first area according to the first position and the first information.
The manner of controlling the robot to leave the first area can be selected according to the actual situation. In some embodiments of the present application, a first path may be generated according to the first position and the first information, and the robot controlled to leave the first area along it.
Specifically, the terminal device may build a primary electronic map from the exit position and the relative position relationship. The primary electronic map may be a three-dimensional coordinate map, a topological map, or the like. That is, a two-dimensional or three-dimensional coordinate system can be established with the first position of the target robot as the initial position and the exit position as the target position, yielding the primary electronic map, on which global path planning can then be performed.
Since part of the content of the primary electronic map may be unknown, that is, some feasible paths and obstacles in it are unknown, a short-range map may be built from the images the robot captures while moving and used to supplement the global primary electronic map in real time so as to update the path.
That is, in some embodiments of the present application, a first path may be generated according to the first position and the first information, and the robot controlled to move along it. During the movement of the robot, if a preset updating condition is met, the process returns to the step of generating a first path according to the first position and the first information, so that the robot leaves the first area.
Specifically, after global path planning is completed on the primary electronic map, the robot moves along the first path. Whenever the robot can capture an image of its surroundings by camera or radar, that image is treated as a short-range map: image recognition extracts the positions of obstacles and feasible paths, which are converted into coordinate points on the short-range map. On each short-range map, a temporary target position is then set based on the first path, the robot's position at the time the short-range map was acquired serves as the temporary current position, and the first path is corrected to obtain a new first path.
The preset updating condition is the condition under which the first path needs to be updated and can be chosen according to the actual situation. For example, it may be the arrival of a preset time, that is, path planning is performed again at preset intervals on the electronic map supplemented by the images the robot captures while moving. Alternatively, the preset updating condition may be that the short-range map contains new content absent from the primary electronic map, that is, the path is replanned whenever the primary electronic map is updated. Or the preset updating condition may be that the robot encounters an obstacle.
The specific path-planning algorithm may likewise be selected according to the actual situation; for example, a simulated annealing algorithm, an artificial potential field method, a fuzzy logic algorithm, or a tabu search algorithm may be used.
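The patent leaves the planner open, as the list above shows. Purely as one concrete illustration of global planning on a grid rendering of the primary electronic map, the sketch below uses A*, which is a substitution for this illustration rather than an algorithm named in the patent; the occupancy-grid representation is likewise an assumption.

```python
import heapq

def plan_first_path(grid, start, goal):
    """A* on a 2-D occupancy grid (0 = free, 1 = obstacle). Returns the list
    of cells from start to goal, or None if the exit is unreachable."""
    rows, cols = len(grid), len(grid[0])
    open_set = [(0, start)]
    g_cost = {start: 0}
    parent = {}
    while open_set:
        _, cell = heapq.heappop(open_set)
        if cell == goal:                      # rebuild the path by backtracking
            path = [cell]
            while cell in parent:
                cell = parent[cell]
                path.append(cell)
            return path[::-1]
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nxt
            if 0 <= nx < rows and 0 <= ny < cols and grid[nx][ny] == 0:
                new_g = g_cost[cell] + 1
                if new_g < g_cost.get(nxt, float("inf")):
                    g_cost[nxt] = new_g
                    parent[nxt] = cell
                    # Manhattan heuristic: admissible on a 4-connected grid
                    f = new_g + abs(nx - goal[0]) + abs(ny - goal[1])
                    heapq.heappush(open_set, (f, nxt))
    return None
```

A replanning loop would then mark obstacles observed in each short-range map into the grid and call plan_first_path again from the robot's current cell whenever the preset updating condition fires.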
In some embodiments of the application, if the terminal device acquires the area structure diagram corresponding to the first area through the identifier of the first area, the area structure diagram and the second image may be matched, and the specific position of the robot in the area structure diagram may be determined according to the above-mentioned relative position relationship. At this time, a path planning may be performed according to the position of the robot and the exit position in the area structure diagram to control the robot to leave the first area.
For example, after the area structure map is acquired, the position of the robot in the area structure map may be determined according to information such as a window and a staircase included in the second image and the relative positional relationship between the robot and the terminal device. And then determining the exit position of the exit in the area structure diagram according to the first information, and planning a path from the position of the robot to the exit position by using the area structure diagram so as to control the robot to leave the first area.
Concretely, either the terminal device sends a control instruction to the robot and the robot performs the path planning itself and leaves the first area, or the terminal device sends the first path it has computed to the robot and the robot leaves the first area along that path.
In the embodiments of the present application, a second image collected by the robot in real time is acquired, and the first position of the robot in the first area is determined according to the first information, the second image, and the relative position relationship. The robot can then be controlled to leave the first area according to the first position and the first information, rescuing the robot.
In other embodiments of the present application, the robot may not be equipped with a camera, or its camera may be damaged; in that case, the terminal device may control the robot to keep trying directions until it leaves the first area.
Specifically, as shown in fig. 7, controlling the robot to keep trying to find a way until it leaves the first area may include the following steps 701 to 702.
Step 701, controlling the robot to move in the exit direction according to the first information.
The exit direction is the direction of the exit relative to the robot.
In some embodiments of the present application, the terminal device may move to the exit position after acquiring the first information containing it. Controlling the robot to move toward the terminal device is then equivalent to controlling the robot to move in the exit direction.
In other embodiments of the present application, the relative position between the exit and the terminal device may be obtained from the first information, and the robot may then be controlled to move in the exit direction according to that relative position and the relative position between the robot and the terminal device.
Specifically, the terminal device may obtain the relative position between the exit and itself by calculating, from an acquired first image containing the exit, the coordinate position of the exit relative to itself. Alternatively, the distance between the exit and the terminal device may be measured by a ranging device fitted on the terminal device, such as a laser radar or an infrared sensor, and the coordinate position of the exit relative to the terminal device determined from the measured distance.
After the relative position between the exit and the terminal device is obtained, the direction of the exit relative to the robot can be determined from the relative position between the robot and the terminal device, and the robot can then be controlled to move in the exit direction.
Step 702, during the movement of the robot, adjusting the movement direction of the robot according to the relative position relationship so that the robot leaves the first area.
In some embodiments of the present application, while the robot moves in the first direction, the terminal device continuously obtains the relative position relationship. Since the terminal device is located at the exit position, if the relative distance between the robot and the terminal device grows or stays unchanged as the robot moves in the first direction, the robot is not approaching the exit and its movement direction needs to be adjusted; if the relative distance keeps shrinking, the robot is approaching the exit and no adjustment is needed. By continuously adjusting the movement direction in this way, the robot can finally leave the first area.
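Because the terminal device waits at the exit, this wayfinding reduces to a greedy feedback rule: keep the heading while the terminal-robot distance shrinks, turn otherwise. The sketch below is a pure-function version of one such step; the turn angle is an assumed tuning value, not a figure from the patent.

```python
def adjust_heading(dist_before_m: float, dist_after_m: float,
                   heading_deg: float, turn_deg: float = 30.0) -> float:
    """One wayfinding step: after the robot advances, keep the heading if the
    last move brought it closer to the exit-side terminal, otherwise turn."""
    if dist_after_m < dist_before_m:
        return heading_deg                      # approaching the exit: keep going
    return (heading_deg + turn_deg) % 360.0     # farther or unchanged: try a new direction
```

Calling this after every short advance lets the RSSI-derived distance act as the only feedback signal, which matches the assumption that the robot's camera is unavailable.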
Further, in some embodiments of the present application, when the movement direction of the robot is adjusted, motion information of the robot may also be acquired. The movement direction of the robot is then adjusted according to the motion information and the relative position relationship so that the robot leaves the first area.
The motion information refers to current motion state information of the robot.
For example, the motion information may be the robot's current displacement data. If the robot's displacement within a preset duration is smaller than a preset displacement threshold, the robot is probably blocked by an obstacle. The movement direction of the robot can then be adjusted according to the relative position relationship so that the robot avoids the obstacle and keeps approaching the exit.
The preset duration and the preset displacement threshold can be adjusted according to actual conditions.
For another example, the motion information may be the robot's current speed data. Generally, when a robot collides with an obstacle, its speed changes abnormally and abruptly: it decelerates suddenly and then gradually recovers. Therefore, if the speed data show an abnormal sudden change, a collision with an obstacle can be inferred. The movement direction of the robot can then be adjusted according to the relative position relationship so that the robot avoids the obstacle and keeps approaching the exit.
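Both checks described above can be phrased directly over the motion information: a displacement that stays under the preset threshold for the preset duration, and an abrupt drop in speed relative to its recent average. The threshold values below are illustrative assumptions, as is the shape of the speed history.

```python
PRESET_DURATION_S = 3.0        # assumed preset duration
PRESET_DISPLACEMENT_M = 0.05   # assumed preset displacement threshold
SUDDEN_DECEL_RATIO = 0.3       # assumed: below 30% of the recent mean speed

def seems_blocked(displacement_m: float) -> bool:
    """Displacement over the preset duration stayed below the threshold."""
    return displacement_m < PRESET_DISPLACEMENT_M

def speed_anomaly(speed_history_mps: list[float]) -> bool:
    """An abrupt deceleration relative to the recent average suggests a collision."""
    if len(speed_history_mps) < 2:
        return False
    recent_mean = sum(speed_history_mps[:-1]) / (len(speed_history_mps) - 1)
    return speed_history_mps[-1] < SUDDEN_DECEL_RATIO * recent_mean
```

When either check fires, the movement direction is adjusted according to the relative position relationship, as described above.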
In the embodiments of the present application, the terminal device moves to the exit position according to the first information, and the robot is then controlled to move in the first direction. During the robot's movement, its movement direction is adjusted according to the relative position relationship so that the robot leaves the first area, thereby rescuing the robot.
In other embodiments of the present application, the robot may be fitted with a device, such as an odometer, that can record motion data and does so automatically while the robot moves, but that device may be damaged at some location, leaving the robot trapped in the first area. In that case, the terminal device can analyze the historical motion data to control the robot to leave the first area.
Specifically, as shown in fig. 8, the analyzing the historical motion data to control the robot to leave the first area may include the following steps 801 to 804.
Step 801, acquiring historical motion data of the robot in a first area.
The historical motion data may include displacement data, direction data, speed data, and the like of the robot in the first area.
In some embodiments of the present application, the terminal device may directly receive the historical motion data transmitted by the robot. In other embodiments, since a robot in a normal state sends its motion information to the dispatching terminal in real time, the terminal device may instead obtain the robot's historical motion data from the dispatching terminal.
Step 802, calculating the theoretical position of the robot according to the historical motion data and the first information.
The theoretical position is the position at which the robot's abnormality occurred; because of the abnormality, the robot may have lost its connection to the terminal device or left its pre-planned working route.
In some embodiments of the present application, the movement direction and movement distance of the robot from the exit can be computed from the historical motion data and the first information, and the theoretical position of the robot calculated from them.
Step 803, controlling the robot to move to the theoretical position according to the theoretical position and the relative position relationship.
Step 804, controlling the robot to leave the first area according to the theoretical position and the historical motion data.
In some embodiments of the present application, after the theoretical position is calculated, the robot may first be controlled to move to the theoretical position according to the relative position relationship, using the specific implementation of fig. 5 or fig. 7. The robot is then controlled to leave the first area based on the theoretical position and the historical motion data.
It will be appreciated that once the robot reaches the theoretical position, it only needs to reverse the recorded movement from the exit to the theoretical position, that is, retrace it from the theoretical position back to the exit.
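A minimal sketch of steps 802 and 804 together, under the assumption that the historical motion data take the form of (heading, distance) legs logged from the exit onward: integrating the legs dead-reckons the theoretical position relative to the exit, and reversing them yields the legs for the way back.

```python
import math

def theoretical_position(legs):
    """Dead-reckon the position where the anomaly occurred from logged
    (heading_rad, distance_m) legs, measured from the exit of the first area."""
    x = y = 0.0
    for heading, dist in legs:
        x += dist * math.cos(heading)
        y += dist * math.sin(heading)
    return x, y

def return_legs(legs):
    """Retrace the recorded path: same leg lengths, opposite headings,
    executed in reverse order."""
    return [((h + math.pi) % (2 * math.pi), d) for h, d in reversed(legs)]
```

This direct retrace is what makes the approach cheaper than replanning, at the cost of assuming the log is trustworthy up to the failure point.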
It should be noted that, the manner of controlling the robot to leave the first area according to the theoretical position and the historical motion data may be various. For example, in some embodiments of the present application, the terminal device may determine, according to the theoretical position and the historical movement data, movement data required by the robot to leave the first area, and send the movement data to the robot, and the robot leaves the first area directly according to the movement data. For another example, in other embodiments of the present application, the terminal device may directly send a control instruction to the robot to control the robot to determine the motion data required to leave the first area.
In the embodiments of the present application, historical motion data of the robot in the first area are acquired to calculate the robot's theoretical position. The robot is then controlled to move to the theoretical position according to the theoretical position and the relative position relationship, and further controlled to leave the first area according to the theoretical position and the historical motion data, thereby rescuing the robot. Moreover, in this situation, using the historical data to return from the theoretical position directly to the exit involves less computation than path planning, or than having the unmanned aerial vehicle continuously steer the robot in search of a movement direction, so rescue is more efficient.
It should be noted that, for simplicity of description, the foregoing method embodiments are described as a series of acts or combinations of acts, but those skilled in the art will recognize that the present application is not limited by the described order of acts, as some steps may, in accordance with the present application, be performed in other orders.
As shown in fig. 9, a schematic structural diagram of a robot control device 900 according to an embodiment of the present application is provided, where the robot control device 900 is configured on a terminal device, and the robot control device 900 may include: a determination unit 901, an acquisition unit 902, and a control unit 903.
A determining unit 901, configured to determine a relative positional relationship between the terminal device and the robot according to a signal strength of a communication signal between the terminal device and the robot;
an obtaining unit 902, configured to obtain first information of a first area; the first information includes an exit location of the first region; the terminal equipment is positioned outside the first area;
a control unit 903, configured to control the robot to leave the first area according to the first information and the relative position relationship.
In some embodiments of the present application, the first information includes a target position of the light-transmitting structure of the first area, and the control unit 903 is further specifically configured to: capture, according to the target position, a third image containing the robot through the light-transmitting structure; update the relative position relationship according to the third image; and control the robot to leave the first area according to the first information and the updated relative position relationship.
In some embodiments of the present application, the control unit 903 is further specifically configured to: judge, according to the real-time position of the terminal device and the target position, whether the terminal device can shoot the robot through the light-transmitting structure at the target position; if the robot can be shot through the light-transmitting structure, shoot the third image through the light-transmitting structure; and if the robot cannot be shot through the light-transmitting structure, adjust the real-time position of the terminal device so that the terminal device can shoot the robot through the light-transmitting structure.
In some embodiments of the present application, the first information further includes an identifier of the first area, and the control unit 903 is further specifically configured to: acquire an area structure diagram corresponding to the first area according to the identifier; and control the robot to leave the first area according to the area structure diagram, the first information, and the relative position relationship.
In some embodiments of the present application, the control unit 903 is further specifically configured to: acquire a second image collected by the robot in real time; determine a first position of the robot in the first area according to the first information, the second image, and the relative position relationship; and control the robot to leave the first area according to the first position and the first information.
In some embodiments of the present application, the control unit 903 is further specifically configured to: generate a first path according to the first position and the first information; control the robot to move along the first path; and, during the movement of the robot, if a preset updating condition is met, return to the step of generating a first path according to the first position and the first information, so that the robot leaves the first area.
In some embodiments of the present application, the control unit 903 is further specifically configured to: control the robot to move in the exit direction according to the first information; and, during the movement of the robot, adjust the movement direction of the robot according to the relative position relationship so that the robot leaves the first area.
In some embodiments of the present application, the control unit 903 is further specifically configured to: acquire motion information of the robot; and adjust the movement direction of the robot according to the motion information and the relative position relationship so that the robot leaves the first area.
In some embodiments of the present application, the control unit 903 is further specifically configured to: acquire historical motion data of the robot in the first area; calculate the theoretical position of the robot according to the historical motion data and the first information; control the robot to move to the theoretical position according to the theoretical position and the relative position relationship; and control the robot to leave the first area according to the theoretical position and the historical motion data.
It should be noted that, for convenience and simplicity of description, the specific working process of the robot control device 900 may refer to the corresponding process of the method described in fig. 1 to fig. 8, and is not described herein again.
Fig. 10 is a schematic diagram of a terminal according to an embodiment of the present application. The terminal 100 may include a processor 1000, a memory 1001, and a computer program 1002, such as a robot control program, stored in the memory 1001 and executable on the processor 1000. When executing the computer program 1002, the processor 1000 implements the steps in the robot control method embodiments described above, such as steps 101 to 103 shown in fig. 1; alternatively, it implements the functions of the modules/units in the device embodiments described above, such as the functions of units 901 to 903 shown in fig. 9.
The computer program may be partitioned into one or more modules/units that are stored in the memory 1001 and executed by the processor 1000 to accomplish the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution of the computer program in the terminal. For example, the computer program may be divided into a determination unit, an acquisition unit and a control unit. The specific functions of each unit are as follows:
the determining unit is used for determining the relative position relationship between the terminal equipment and the robot according to the signal strength of a communication signal between the terminal equipment and the robot (one possible signal-strength ranging model is sketched after this list);
the acquisition unit is used for acquiring first information of a first area, wherein the first information includes an exit location of the first area and the terminal equipment is positioned outside the first area;
and the control unit is used for controlling the robot to leave the first area according to the first information and the relative position relationship.
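The application does not fix how the signal strength is mapped to a relative position; one common assumption, shown below as a minimal sketch, is the log-distance path-loss model, with an illustrative reference RSSI of -45 dBm at 1 m and a path-loss exponent of 2.0.

def rssi_to_distance(rssi_dbm, rssi_at_1m=-45.0, path_loss_exponent=2.0):
    # Log-distance path-loss model: received power falls off by
    # 10 * n * log10(d) dB over distance d, so the distance can be
    # recovered from a single RSSI reading. Both constants depend on
    # the radio and the environment and are assumptions here.
    return 10.0 ** ((rssi_at_1m - rssi_dbm) / (10.0 * path_loss_exponent))

Distances estimated at several terminal positions, or from several antennas, can then be combined to recover a bearing as well as a range, which together form the relative position relationship used by the control unit.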
The terminal may be a computing device such as an unmanned aerial vehicle, a smartphone, a robot, a desktop computer, a notebook computer, a palmtop computer or a cloud server. The terminal may include, but is not limited to, a processor 1000 and a memory 1001. Those skilled in the art will appreciate that fig. 10 is merely an example of a terminal and is not intended to be limiting: the terminal may include more or fewer components than those shown, some components may be combined, or different components may be used; for example, the terminal may also include input-output devices, network access devices, buses, and the like.
The processor 1000 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 1001 may be an internal storage unit of the terminal, such as a hard disk or an internal memory of the terminal. The memory 1001 may also be an external storage device of the terminal, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card or a flash card equipped on the terminal. Further, the memory 1001 may include both an internal storage unit and an external storage device of the terminal. The memory 1001 is used to store the computer program and other programs and data required by the terminal, and may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal and method may be implemented in other ways. For example, the above-described apparatus/terminal embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer-readable storage medium. Based on such understanding, all or part of the flow in the methods of the embodiments described above can be realized by a computer program, which can be stored in a computer-readable storage medium and, when executed by a processor, realizes the steps of the method embodiments described above. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased as required by legislation and patent practice in the relevant jurisdiction; for example, in some jurisdictions, computer-readable media do not include electrical carrier signals and telecommunications signals.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (11)

1. A robot control method, characterized by comprising:
determining the relative position relationship between the terminal equipment and the robot according to the signal strength of a communication signal between the terminal equipment and the robot;
acquiring first information of a first area; the first information includes an exit location of the first area; the terminal equipment is positioned outside the first area;
and controlling the robot to leave the first area according to the first information and the relative position relation.
2. The robot control method according to claim 1, wherein the first information includes a target position of a light-transmitting structure of the first area;
the controlling the robot to leave the first area according to the first information and the relative position relationship comprises:
capturing a third image containing the robot through the light-transmitting structure according to the target position;
updating the relative position relationship according to the third image;
and controlling the robot to leave the first area according to the first information and the updated relative position relationship.
3. The robot control method of claim 2, wherein said capturing a third image containing the robot through the light-transmitting structure according to the target position comprises:
judging, according to the real-time position of the terminal equipment and the target position, whether the terminal equipment can capture the robot through the light-transmitting structure;
if the robot can be captured through the light-transmitting structure, capturing the third image through the light-transmitting structure;
and if the robot cannot be captured through the light-transmitting structure, adjusting the real-time position of the terminal equipment so that the terminal equipment can capture the robot through the light-transmitting structure.
4. The robot control method according to claim 1, wherein the first information further includes an identifier of the first area;
the controlling the robot to leave the first area according to the first information and the relative position relationship comprises:
acquiring an area structure diagram corresponding to the first area according to the identifier;
and controlling the robot to leave the first area according to the area structure diagram, the first information and the relative position relation.
5. The robot control method according to any one of claims 1 to 4, wherein the controlling the robot to leave the first area according to the first information and the relative position relationship comprises:
acquiring a second image acquired by the robot in real time;
determining a first position of the robot in the first area according to the first information, the second image and the relative position relation;
and controlling the robot to leave the first area according to the first position and the first information.
6. The robot control method of claim 5, wherein said controlling the robot to leave the first area according to the first position and the first information comprises:
generating a first path according to the first position and the first information;
controlling the robot to move according to the first path;
and in the process of the movement of the robot, if a preset update condition is met, returning to the step of generating the first path according to the first position and the first information, so that the robot leaves the first area.
7. The robot control method according to any one of claims 1 to 4, wherein the controlling the robot to leave the first area according to the first information and the relative position relationship further comprises:
controlling the robot to move in the exit direction according to the first information;
and in the process of the movement of the robot, adjusting the movement direction of the robot according to the relative position relationship until the robot leaves the first area.
8. The robot control method according to claim 7, wherein the adjusting the movement direction of the robot according to the relative position relationship so that the robot leaves the first area comprises:
acquiring motion information of the robot;
and adjusting the movement direction of the robot according to the movement information and the relative position relationship so as to enable the robot to leave the first area.
9. The robot control method according to any one of claims 1 to 4, wherein the controlling the robot to leave the first area according to the first information and the relative position relationship comprises:
acquiring historical motion data of the robot in the first area;
calculating the theoretical position of the robot according to the historical motion data and the first information;
controlling the robot to move to the theoretical position according to the theoretical position and the relative position relationship;
and controlling the robot to leave the first area according to the theoretical position and the historical motion data.
10. A terminal comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any of claims 1 to 9 when executing the computer program.
11. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 9.
CN202010753106.4A (priority date 2020-07-30; filing date 2020-07-30) Robot control method, device, terminal and medium; status: Active; granted as CN111954188B (en)

Priority Applications (1)

Application Number: CN202010753106.4A (granted as CN111954188B); Priority Date: 2020-07-30; Filing Date: 2020-07-30; Title: Robot control method, device, terminal and medium

Publications (2)

CN111954188A, published 2020-11-17
CN111954188B (granted publication), published 2024-01-19

Family

ID: 73338311

Family Applications (1)

CN202010753106.4A, Robot control method, device, terminal and medium; status: Active; granted as CN111954188B (en)

Country Status (1)

CN: CN111954188B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN114538228A * | 2021-11-29 | 2022-05-27 | 北京云迹科技股份有限公司 | Robot recovery mechanism method and device

Citations (8)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US20120158235A1 * | 2010-12-20 | 2012-06-21 | Mckesson Automation, Inc. | Methods, apparatuses and computer program products for utilizing near field communication to guide robots
CN105739500A * | 2016-03-29 | 2016-07-06 | 海尔优家智能科技(北京)有限公司 | Interaction control method and device of intelligent sweeping robot
CN108281789A * | 2018-01-12 | 2018-07-13 | 深圳市道通智能航空技术有限公司 | Blind area tracking, its device and the mobile tracing system of directional aerial
CN108594806A * | 2018-04-03 | 2018-09-28 | 深圳市沃特沃德股份有限公司 | Sweeper is got rid of poverty method and apparatus
CN109394086A * | 2018-11-19 | 2019-03-01 | 珠海市微半导体有限公司 | A kind of walk on method, apparatus and chip based on trapped clean robot
CN109471429A * | 2018-09-29 | 2019-03-15 | 北京奇虎科技有限公司 | A kind of robot cleaning method, device and electronic equipment
CN109528089A * | 2018-11-19 | 2019-03-29 | 珠海市微半导体有限公司 | A kind of walk on method, apparatus and the chip of stranded clean robot
CN110338708A * | 2019-06-21 | 2019-10-18 | 华为技术有限公司 | A kind of the cleaning control method and equipment of sweeping robot

Also Published As

CN111954188B (en), published 2024-01-19

Similar Documents

Publication Publication Date Title
US20200333789A1 (en) Information processing apparatus, information processing method, and medium
US10139817B2 (en) Unmanned aircraft systems and methods to interact with specifically intended objects
CN109933064B (en) Multi-sensor safety path system for autonomous vehicles
CN111968262B (en) Semantic intelligent substation inspection operation robot navigation system and method
US20190286145A1 (en) Method and Apparatus for Dynamic Obstacle Avoidance by Mobile Robots
KR100785784B1 (en) System and method for calculating locations by landmark and odometry
JP2020502654A (en) Human-machine hybrid decision-making method and apparatus
CN113256716B (en) Control method of robot and robot
CN111784748A (en) Target tracking method and device, electronic equipment and mobile carrier
CN112859873A (en) Semantic laser-based mobile robot multi-stage obstacle avoidance system and method
CN111947644B (en) Outdoor mobile robot positioning method and system and electronic equipment thereof
CN112817313B (en) Positioning and navigation system and method of intelligent shopping cart for shopping mall
CN111123964A (en) Unmanned aerial vehicle landing method and device and computer readable medium
CN115752462A (en) Method, system, electronic equipment and medium for inspecting key inspection targets in building
CN111954188B (en) Robot control method, device, terminal and medium
CN114527763A (en) Intelligent inspection system and method based on target detection and SLAM composition
US20210125369A1 (en) Drone-assisted sensor mapping
CN112327868A (en) Intelligent robot automatic navigation system
JP2022526071A (en) Situational awareness monitoring
CN116629106A (en) Quasi-digital twin method, system, equipment and medium for mobile robot operation scene
WO2022004333A1 (en) Information processing device, information processing system, information processing method, and program
Legovich et al. Integration of modern technologies for solving territory patroling problems with the use of heterogeneous autonomous robotic systems
CN110900603B (en) Method, medium, terminal and device for identifying elevator through geometric features
CN116700228A (en) Robot path planning method, electronic device and readable storage medium
CN113064425A (en) AGV equipment and navigation control method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant