CN114643580B - Robot control method, device and equipment - Google Patents

Robot control method, device and equipment

Info

Publication number
CN114643580B
Authority
CN
China
Prior art keywords
robot
camera
tof
sub
area
Prior art date
Legal status
Active
Application number
CN202210325814.7A
Other languages
Chinese (zh)
Other versions
CN114643580A (en)
Inventor
王春茂
张文聪
Current Assignee
Hangzhou Hikrobot Co Ltd
Original Assignee
Hangzhou Hikrobot Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Hikrobot Co Ltd
Priority to CN202210325814.7A
Publication of CN114643580A
Application granted
Publication of CN114643580B
Status: Active


Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1602 Programme controls characterised by the control system, structure, architecture
    • B25J9/161 Hardware, e.g. neural networks, fuzzy logic, interfaces, processor
    • B25J9/1656 Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1661 Programme controls characterised by task planning, object-oriented languages
    • B25J9/1664 Programme controls characterised by motion, path, trajectory planning
    • B25J13/00 Controls for manipulators

Abstract

The application provides a robot control method, apparatus and device. A motion area where robots equipped with TOF cameras move is divided into at least one sub-area; each sub-area corresponds to at least one preset camera code, and the camera codes corresponding to adjacent sub-areas are different. The method comprises: predicting the crossing moment at which a robot will reach the boundary between its current sub-area and the next sub-area, based on the current position information reported by the robot and the acquired motion information of the robot; determining whether the number of robots in the next sub-area will reach a preset threshold at the crossing moment; and if not, issuing to the robot a camera code that is unoccupied in the next sub-area at the crossing moment, so that after reaching the next sub-area the robot determines the anti-interference parameter corresponding to that camera code and controls the TOF camera deployed on it based on that parameter. Anti-interference among a large number of TOF-camera-equipped robots is thereby achieved.

Description

Robot control method, device and equipment
Technical Field
The present application relates to the field of robots, and in particular, to a method, an apparatus, and a device for controlling a robot.
Background
The principle of a TOF (Time of Flight) camera is as follows: the camera emits modulated light, which is reflected when it encounters an object. By measuring the time difference or phase difference between emitting the light and receiving its reflection, the TOF camera calculates the distance between itself and the object and generates a depth image or a three-dimensional image of the object.
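As a concrete illustration, the following is a minimal Python sketch (ours, not from the patent) of the pulse-based variant of this computation; the factor of one half accounts for the round trip of the light:

```python
# Minimal sketch of pulse-based TOF ranging (illustrative; not from the patent).
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def tof_distance(round_trip_time_s: float) -> float:
    """Distance from the time difference between emitting the light and
    receiving its reflection; halved because the light travels out and back."""
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0

# A round trip of about 66.7 nanoseconds corresponds to roughly 10 metres.
print(tof_distance(66.7e-9))  # ~10.0
```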
Since a TOF camera can measure the distance between itself and an object, in the field of robotics TOF cameras are typically deployed on robots to realize obstacle avoidance. However, when multiple robots with TOF cameras operate in one motion area, the cameras can interfere with each other. For example, light emitted by TOF camera A deployed on robot A may be received by TOF camera B on robot B, so that camera B cannot correctly calculate the distance to an object and robot B cannot avoid obstacles correctly.
To solve the interference problem among multiple TOF cameras, various anti-interference methods have been proposed, but the existing methods can only keep a limited number of TOF cameras in a given area from interfering. In an actual intelligent warehousing scene, the number of robots equipped with TOF cameras can reach several hundred, and the existing anti-interference methods perform poorly in such a scene.
Disclosure of Invention
In view of the above, the present application provides a robot control method, apparatus and device for implementing anti-interference of a large number of TOF cameras.
Specifically, the application is realized by the following technical scheme:
according to a first aspect of the present application, a robot control method is provided, where the method is applied to a server, a motion area where a robot configured with a TOF camera is located is divided into at least one sub-area, each sub-area corresponds to at least one preset camera code, and camera codes corresponding to adjacent sub-areas are different; each sub-zone is different in assigned camera code for robots within the zone, the method comprising:
predicting the crossing moment at which the robot will reach the boundary between the current sub-area and the next sub-area, based on the current position information reported by the robot and the acquired motion information of the robot;
determining whether the number of robots in the next sub-area reaches a preset threshold at the crossing moment;
and if not, issuing to the robot a camera code that is unoccupied in the next sub-area at the crossing moment, so that after reaching the next sub-area the robot determines the anti-interference parameter corresponding to the camera code and controls the TOF camera deployed on it based on that parameter.
Optionally, the method further comprises:
if so, controlling the robot to stop moving;
and when it is monitored that the number of robots in the next sub-area falls below the preset threshold, triggering the robot to move and allocating to it an unoccupied camera code of the next sub-area, so that after reaching the next sub-area the robot determines the anti-interference parameter that corresponds to the camera code and matches the locally configured anti-interference method, and controls the TOF camera deployed on it based on that parameter.
Optionally, the method further comprises:
and when it is detected that the robot has driven into the next sub-area, reclaiming the camera code the robot held in the previous sub-area.
Optionally, the size of the sub-region is related to a distance threshold at which any two TOF cameras do not interfere;
when the working mode of the TOF camera deployed by the robot is a time division multiplexing mode, the anti-interference parameters comprise a light emission time delay;
when the working mode of the TOF camera deployed by the robot is a frequency division multiplexing mode, the anti-interference parameters comprise a modulation frequency;
when the working mode of the TOF camera deployed by the robot is a code division multiplexing mode, the anti-interference parameters comprise a camera code.
Optionally, the divided sub-areas are arranged in a honeycomb shape.
Optionally, if the robot reports its current position information upon determining that the distance from its current position to the region boundary is within a preset range, the robot control method is executed when the reported current position information is received;
or,
if the robot reports its current position information periodically, the robot control method is executed when it is determined that the distance from the robot's current position to the region boundary is within a preset range.
Optionally, the motion information includes: the path information of the robot reaching the task target and the motion rate of the robot;
the predicting the cross-boundary moment of the robot reaching the region boundary of the current sub-region and the next sub-region based on the current position information reported by the robot and the acquired motion information of the robot comprises the following steps:
determining a next sub-area to be reached by the robot based on path information of the robot reaching a task target, the current position information and the motion area division condition, and determining an area boundary between the current sub-area and the next sub-area;
determining the distance from the current position to the region boundary based on the path information of the robot to the task target and the current position information;
and determining the crossing moment at which the robot reaches the region boundary based on that distance and the movement rate of the robot.
According to a second aspect of the present application, a robot control method is provided, applied to a robot configured with a TOF camera. The motion area where the robot moves is divided into at least one sub-area; each sub-area corresponds to at least one preset camera code, and the camera codes corresponding to adjacent sub-areas are different; within each sub-area, different robots are assigned different camera codes. The method comprises:
reporting the current position information of the robot to a server, so that the server predicts the crossing moment at which the robot will reach the boundary between the current sub-area and the next sub-area based on the reported current position information and the acquired motion information of the robot, and, upon determining that the number of robots in the next sub-area will not reach a preset threshold at the crossing moment, determines and issues a camera code that is unoccupied in the next sub-area at the crossing moment;
receiving the camera code issued by the server;
and determining the anti-interference parameter corresponding to the camera code and, after reaching the next sub-area, controlling the TOF camera on the robot based on that parameter.
Optionally, in the case that the operation mode of the TOF camera on the robot is a time division multiplexing mode, the anti-interference parameter includes a light emission time delay; the server is synchronous with the TOF camera clocks deployed on the robots;
the determining the anti-interference parameter corresponding to the camera code, and controlling the TOF camera on the robot to work based on the anti-interference parameter after reaching the next sub-area comprises the following steps:
determining the light emission time delay corresponding to the camera code, wherein different camera codes correspond to different light emission time delays;
and sending the light emission time delay to the TOF camera deployed on the robot, so that the TOF camera emits light according to the time delay.
Optionally, in the case that the working mode of the TOF camera on the robot is a frequency division multiplexing mode, the anti-interference parameter includes a modulation frequency;
the determining the anti-interference parameter corresponding to the camera code, and controlling the TOF camera on the robot to work based on the anti-interference parameter after reaching the next sub-area comprises the following steps:
determining the modulation frequency corresponding to the camera code, wherein different camera codes correspond to different modulation frequencies;
and sending the modulation frequency to the TOF camera deployed on the robot, so that the TOF camera emits light modulated at that frequency.
Optionally, in the case that the working mode of the TOF camera on the robot is a code division multiplexing mode, the anti-interference parameter includes a camera code;
the determining an anti-interference parameter corresponding to the camera code and matched with a locally configured anti-interference method, and controlling the TOF camera on the robot to work based on the anti-interference parameter comprises the following steps:
and sending the camera code to the TOF camera on the robot, so that the TOF camera takes the camera code as the codeword for laser pulse coding, encodes the laser pulses to be emitted according to the codeword, and emits the encoded laser pulses.
According to a third aspect of the present application, a robot control device is provided. The device is applied to a server; the motion area where robots equipped with TOF cameras move is divided into at least one sub-area, each sub-area corresponds to at least one preset camera code, and the camera codes corresponding to adjacent sub-areas are different; within each sub-area, different robots are assigned different camera codes. The device comprises:
a prediction unit, configured to predict the crossing moment at which the robot will reach the boundary between the current sub-area and the next sub-area based on the current position information reported by the robot and the acquired motion information of the robot;
a determining unit, configured to determine whether the number of robots in the next sub-area reaches a preset threshold at the crossing moment;
and an issuing unit, configured to, if not, issue to the robot a camera code that is unoccupied in the next sub-area at the crossing moment, so that after reaching the next sub-area the robot determines the anti-interference parameter corresponding to the camera code and controls the TOF camera deployed on it based on that parameter.
According to a fourth aspect of the present application, a robot control device is provided, applied to a robot configured with a TOF camera. The motion area where the robot moves is divided into at least one sub-area; each sub-area corresponds to at least one preset camera code, and the camera codes corresponding to adjacent sub-areas are different; within each sub-area, different robots are assigned different camera codes. The device comprises:
a reporting unit, configured to report the current position information of the robot to a server, so that the server predicts the crossing moment at which the robot will reach the boundary between the current sub-area and the next sub-area based on the reported current position information and the acquired motion information of the robot, and, upon determining that the number of robots in the next sub-area will not reach a preset threshold at the crossing moment, determines and issues a camera code that is unoccupied in the next sub-area at the crossing moment;
a receiving unit, configured to receive the camera code issued by the server;
and a determining unit, configured to determine the anti-interference parameter corresponding to the camera code and, after reaching the next sub-area, control the TOF camera on the robot based on that parameter.
According to a fifth aspect of the present application, there is provided an electronic device comprising a readable storage medium and a processor;
wherein the readable storage medium is for storing machine executable instructions;
the processor is configured to read the machine executable instructions on the readable storage medium and execute the instructions to implement the robot control method described above.
The application provides a partition-based robot control method. On the one hand, by controlling the movement of the robots, the server ensures that the number of TOF cameras in each sub-area does not exceed the maximum number supported by the existing anti-interference modes, so that the TOF cameras within each sub-area do not interfere with each other. On the other hand, by configuring adjacent sub-areas with different camera codes, which represent different anti-interference parameters, the TOF cameras of adjacent sub-areas do not interfere with each other either. Together, these two aspects achieve, for each sub-area, anti-interference among the maximum number of TOF cameras supported by the existing method as well as anti-interference between adjacent areas, thereby solving the interference problem of a large number of TOF cameras.
Drawings
FIG. 1 is a networking architecture diagram of a robotic control system according to an exemplary embodiment of the application;
FIG. 2 is a schematic diagram illustrating a motion zone division according to an exemplary embodiment of the present application;
FIG. 3 is a flow chart of a method of robot control according to an exemplary embodiment of the present application;
FIG. 4 is a schematic diagram of a robot motion profile shown in an exemplary embodiment of the present application;
FIG. 5 is a flow chart of a method of robot control according to an exemplary embodiment of the present application;
FIG. 6 is a hardware configuration diagram of an electronic device according to an exemplary embodiment of the present application;
FIG. 7 is a block diagram of a robotic control device according to an exemplary embodiment of the application;
fig. 8 is a block diagram of another robot control device according to an exemplary embodiment of the present application.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples do not represent all implementations consistent with the application. Rather, they are merely examples of apparatus and methods consistent with aspects of the application as detailed in the accompanying claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly second information may be referred to as first information, without departing from the scope of the application. The word "if" as used herein may, depending on the context, be interpreted as "when", "upon", or "in response to determining".
The existing anti-interference method can only prevent a certain number of TOF cameras in a certain area from interfering with each other. For scenes with a large number of TOF cameras, the existing anti-interference mode is poor in effect.
In view of this, the present application proposes a partition-based robot control method, in which the motion area of the robots is divided into a plurality of sub-areas and each sub-area adopts an existing anti-interference mode. On the one hand, by controlling the movement of the robots, the server ensures that the number of TOF cameras in each sub-area does not exceed the maximum number supported by the existing anti-interference mode, so that the TOF cameras within each sub-area do not interfere with each other. On the other hand, by configuring adjacent sub-areas with different camera codes, which represent different anti-interference parameters, the TOF cameras of adjacent sub-areas do not interfere with each other either. Together, these two aspects achieve, for each sub-area, anti-interference among the maximum number of TOF cameras supported by the existing method as well as anti-interference between adjacent areas, thereby solving the interference problem of a large number of TOF cameras.
Specifically, in the application, a motion area of a robot provided with a TOF camera is divided into at least one sub-area, each sub-area corresponds to at least one preset camera code, and camera codes corresponding to adjacent sub-areas are different.
The server predicts the crossing moment at which a robot will reach the boundary between its current sub-area and the next sub-area, based on the current position information reported by the robot and the acquired motion information of the robot. The server may then determine whether the number of robots in the next sub-area will reach a preset threshold at the crossing moment. If not, before the crossing moment arrives, the server determines a camera code that will be unoccupied in the next sub-area at the crossing moment and issues it to the robot, so that after reaching the next sub-area the robot determines the anti-interference parameter corresponding to the camera code and controls the TOF camera deployed on it based on that parameter.
Thus, on the one hand, the motion area of the robots is divided into a plurality of sub-areas, each of which adopts an existing anti-interference mode. Before a robot crosses between areas, the server determines whether the number of robots in the next sub-area has reached the preset threshold; only when it has not does the server let the robot move into the next sub-area and allocate it an unoccupied camera code of that sub-area. The number of TOF cameras in each sub-area is thus kept within the maximum supported by the existing anti-interference mode, preventing the TOF cameras in each sub-area from interfering with each other.
On the other hand, the camera codes corresponding to adjacent sub-areas are different, and different camera codes correspond to different anti-interference parameters, so the TOF cameras on robots in adjacent sub-areas adopt different anti-interference parameters and do not interfere with each other.
Referring to fig. 1, fig. 1 is a schematic view of a networking architecture of a robot control system according to an exemplary embodiment of the present application.
In the present application, a networking architecture of a robot control system includes: a server and a plurality of robots configured with TOF cameras.
The robot can be connected with the server through a wireless network, so that the communication between the robot and the server is realized.
The server manages the robots; it may be a central server, a data center, or another device, which is described here only by way of example and is not specifically limited.
A robot here refers to a movable robot and may include AGVs (Automated Guided Vehicles), industrial robots, consumer robots, entertainment robots, unmanned aerial vehicles, and the like, which are described only by way of example and are not specifically limited.
Before introducing the robot control method, the sub-area division manner and the camera codes corresponding to each sub-area are introduced.
1) Division mode of subareas
In the present application, a movement region in which a robot equipped with a TOF camera is located is divided into at least one sub-region.
For example, as shown in fig. 2, in an alternative implementation, a motion area where a robot configured with a TOF camera is located may be divided into at least one sub-area by means of cellular division, so that the divided sub-areas are arranged in a cellular manner. Of course, in practical applications, other manners may be used to divide the sub-region, such as dividing the sub-region into a plurality of rectangular blocks. The division of the sub-areas is only exemplarily described here, and is not particularly limited.
2) Size of subregion
In the application, the sub-areas can be divided based on the distance threshold at which any two TOF cameras do not interfere. In other words, the size of a sub-area is related to this distance threshold; for example, the size of the sub-area is greater than or equal to the threshold.
The distance threshold at which any two TOF cameras do not interfere is the minimum distance at which they do not interfere. When the distance between two TOF cameras is greater than or equal to this threshold, the limited transmitting power of the cameras means that light emitted by either camera does not reach the other, so the two cameras do not interfere with each other.
In the application, when the size of each sub-area is greater than or equal to this distance threshold, TOF cameras in any two sub-areas separated by at least one sub-area are guaranteed not to interfere with each other.
For example, as shown in fig. 2, one hexagon in fig. 2 is one sub-area. Assuming that no interference occurs when the distance between the two TOF cameras is at least 10 meters, the distance threshold is determined to be 10 meters.
In dividing the sub-regions, it is ensured that the diagonal of each hexagon in fig. 2 (i.e. the size of the sub-region) is greater than or equal to 10 meters.
The size of a sub-area can be its length, width, diagonal, diameter, and the like. For example, if the sub-area is square, its size can be its side length or its diagonal; if the sub-area is circular, its size can be its diameter. The sub-area size is described only by way of example and is not specifically limited.
3) Camera coding for sub-regions
The application introduces the concept of a camera code: different camera codes correspond to different anti-interference parameters, and multiple TOF cameras achieve mutual anti-interference by using different anti-interference parameters.
Specifically, in an embodiment of the present application, at least one camera code is configured for each sub-area. In other words, each sub-area corresponds to at least one camera code. Within each sub-area, the camera codes allocated to different robots are different, which guarantees that the TOF cameras in the same sub-area do not interfere with each other when working simultaneously.
To prevent the TOF cameras of adjacent sub-areas from interfering with each other, the application sets different camera codes for adjacent sub-areas. When a robot moves within a sub-area, the TOF camera deployed on it works with the anti-interference parameters corresponding to one of that sub-area's camera codes; since the camera codes obtained by TOF cameras in adjacent sub-areas are different, the anti-interference parameters they adopt are different, so the TOF cameras of adjacent sub-areas are guaranteed not to interfere with each other.
4) Structure of camera code
In an embodiment of the present application, a camera code includes: region coding and robot coding.
The region codes refer to codes corresponding to each region. In the present application, the region codes of adjacent regions are different. The region codes corresponding to the non-adjacent sub-regions may be the same or different, and are only exemplified herein, and are not specifically limited.
The robot codes are preset codes indicating robots within a sub-area. The number of robot codes corresponding to each sub-area is the maximum number of TOF cameras that the sub-area allows to work without interference. For example, if the existing anti-interference mode can guarantee at most N TOF cameras working in a sub-area without interference, the robot codes may be N codes assigned in advance. The robot codes in different sub-areas may be the same or different.
For example, in the present application, the region codes of adjacent sub-areas are set to be different, while the robot codes corresponding to different sub-areas are the same. As shown in fig. 2, each hexagon in fig. 2 represents a sub-area, and the number in each hexagon is the region code of that sub-area.
Assuming that the maximum number of TOF cameras allowed without interference per sub-area is 3, the robot codes for each sub-area are 01, 10 and 11.
As shown in fig. 2, assuming that the region code of the sub-region represented by the hexagon in the center of fig. 2 is 00 and the robot codes corresponding to the sub-region are 01, 10 and 11, the camera codes corresponding to the sub-region are 0001, 0010 and 0011, respectively.
Assuming that the region code of the sub-area represented by the hexagon directly below the central hexagon of fig. 2 is 11, and the robot codes corresponding to that sub-area are 01, 10 and 11, the camera codes corresponding to that sub-area are 1101, 1110 and 1111, respectively.
It can be seen that the camera codes corresponding to the sub-region 11 and the sub-region 00 (i.e. adjacent sub-regions) can be made different by means of region coding plus robot coding.
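To make the structure concrete, here is a short Python sketch (ours; the names are illustrative) that reproduces the camera codes of the example above by concatenating region code and robot code:

```python
# Illustrative sketch: camera code = region code + robot code (bit strings).
ROBOT_CODES = ["01", "10", "11"]  # one code per interference-free camera slot

def camera_codes_for(region_code):
    """All camera codes of one sub-area, formed by concatenation."""
    return [region_code + robot_code for robot_code in ROBOT_CODES]

print(camera_codes_for("00"))  # ['0001', '0010', '0011']
print(camera_codes_for("11"))  # ['1101', '1110', '1111']
```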
The camera code is described herein by way of example only, and in practice, other configurations may be employed, which are described herein by way of example only and are not particularly limited thereto.
Referring to fig. 3, fig. 3 is a flowchart illustrating a robot control method according to an exemplary embodiment of the present application. The method can be applied to the service end shown in fig. 1.
First, the timing of executing the robot control method will be described. In the application, the server may execute the robot control method when the robot approaches the region boundary.
To achieve this, in an alternative implementation, the robot detects whether the distance between its current position and the boundary of the next area is within a preset range; if so, the robot reports its current position information to the central server, and the central server executes the robot control method after receiving it.
In another alternative implementation, the robot may report its current position periodically. When the central server receives the reported position, it can detect whether the distance from the robot's current position to the area boundary is within a preset range; if so, it executes the robot control method.
The timing of executing the robot control method is described only by way of example; in practical applications, the server may also execute the method upon receiving an externally sent anti-interference instruction. This is not specifically limited here.
Next, the robot control method provided by the application is described in detail.
The robot control method comprises the following steps:
step 301: the service end predicts the crossing moment of the robot reaching the region boundary of the current sub-region and the next sub-region based on the current position information reported by the robot and the acquired motion information of the robot.
Step 301 is described in detail below by steps 3011 to 3013.
Step 3011: the server receives the current position reported by the robot.
In implementation, the robot may report robot information to the server periodically, or upon detecting that its distance to the region boundary is within a preset range. The robot information includes: the current position of the robot, whether the task target has been reached, the robot's current movement direction and speed, and so on. The robot information is described only by way of example and is not specifically limited.
The server side can extract the current position information of the robot from the robot information reported by the robot.
Step 3012: the server obtains the motion information of the robot.
In an alternative implementation, the motion information may include: the path information of the robot reaching the task target and the speed of the robot.
Specifically, after the robot accepts a task, the server may plan for it the path to the task target and the movement rate. For example, when the robot receives a transport task, the server can plan its path and movement rate to the transport table; when the robot receives a charging task, it can plan the path and movement rate to the charging pile. This is not specifically limited here.
The server therefore records, for each robot identifier, the correspondence with the path planned to the robot's current task target and with the robot's movement rate. When acquiring the motion information of a robot, the server can look up the path information and movement rate corresponding to that robot in this correspondence and use them as the robot's motion information.
Step 3013: the server predicts the crossing moment at which the robot will reach the boundary between the current sub-area and the next sub-area, based on the current position information reported by the robot and the acquired motion information of the robot.
In implementation, the server can determine the next sub-area the robot will reach based on the path information of the robot to the task target and the robot's current position information, and determine the boundary between the current sub-area and the next sub-area.
Then, the server may determine the distance from the current position to the area boundary along the planned path.
Finally, the server may determine the crossing moment at which the robot reaches the area boundary based on this distance and the robot's movement rate: dividing the distance by the movement rate gives the remaining travel time, and adding it to the current time gives the crossing moment.
For example, as shown in fig. 4, each hexagon in fig. 4 represents a sub-area, assuming that the path of the robot to reach the task target is represented by the arrowed broken line in fig. 4. As can be seen from fig. 4, the robot will span sub-area 1, sub-area 2, sub-area 3 and sub-area 4. The point B in the subarea 4 is a task target, and the point A is the current position of the robot.
In implementation, the server may determine that the current sub-area is the area 2, the next sub-area is the area 3, and the area boundary is the boundary of the area 2 and the area 3 based on the path information (i.e., the broken line with the arrow in fig. 4) and the current position information (i.e., the point a) of the robot.
The server may then determine the distance from the current position (point A) to the area boundary (the boundary of area 2 and area 3) along the planned path, i.e. the length of segment AC in fig. 4.
Then, the server determines the crossing moment at which the robot reaches the region boundary based on this distance and the robot's movement rate.
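A minimal sketch of this prediction (ours; the function and argument names are assumptions): the remaining distance along the planned path to the boundary, divided by the movement rate, gives the travel time, which is added to the current time:

```python
import time

def predict_crossing_moment(distance_to_boundary_m, movement_rate_m_s):
    """Crossing moment = now + (distance along the path to the boundary) / rate.
    The distance corresponds to segment AC in fig. 4."""
    return time.time() + distance_to_boundary_m / movement_rate_m_s

# Example: 6 m from the boundary at 1.5 m/s -> predicted to cross 4 s from now.
crossing_moment = predict_crossing_moment(6.0, 1.5)
```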
Step 302: the server determines whether the number of robots in the next sub-area will reach a preset threshold at the crossing moment.
The preset threshold is related to the number of TOF cameras that the adopted TOF camera working mode supports operating simultaneously without mutual interference. In other words, if the working mode supports at most N TOF cameras operating simultaneously without interference, the preset threshold is related to N.
For example, assuming that the TOF cameras operate in a time division multiplexing manner, the time division multiplexing manner can maximally support 3 TOF cameras to operate simultaneously and not interfere with each other, and the preset threshold may be 3. The preset threshold value is only exemplarily described here, and is not particularly limited.
In implementing step 302, the server maintains robot information for each sub-area. The robot information includes the robot identifiers and the camera codes assigned to the robots.
When a robot enters a sub-area, its identifier and the camera code allocated to it are added to the robot information corresponding to that sub-area.
When a robot leaves a sub-area, its identifier and the camera code allocated to it are deleted from the robot information corresponding to that sub-area.
Based on this, when the crossing moment arrives, the server can count the number of robots in the next sub-area from the robot information of that sub-area and detect whether the count reaches the preset threshold.
Step 303: if not, the server issues to the robot a camera code that is unoccupied in the next sub-area at the crossing moment, so that after reaching the next sub-area the robot determines the anti-interference parameter corresponding to the camera code and controls the TOF camera deployed on it based on that parameter.
In implementation, the server maintains the correspondence between each sub-area identifier and the camera codes of that sub-area.
Each sub-area identifier corresponds to at least one camera code, and each camera code is either occupied or idle. Occupied means the code has been assigned to a TOF camera deployed on a robot that is in, or about to enter, the sub-area; idle means the code has not been assigned.
The camera code states are updated continuously.
For example, when a robot moves from a first sub-area to a second sub-area, the server may reclaim the robot's camera code in the first sub-area, i.e. set that camera code to the idle state.
For another example, after the server allocates the camera code of the second sub-area to the robot, the server sets the camera code of the second sub-area allocated to the robot to an occupied state.
The correspondence of the sub-area identification and the camera code at a certain moment is shown in table 1, for example.
TABLE 1 (table not reproduced; it maps each sub-area identifier to that sub-area's camera codes and their occupied or idle states)
When step 303 is implemented, the server may determine, from the correspondence shown in table 1, a camera code of the next sub-area that is in the idle state when the crossing moment arrives, and treat it as the camera code unoccupied in the next sub-area at the crossing moment. The server may then issue the determined camera code to the robot.
Of course, in actual application, the server may also determine and issue the camera code within a preset period before the crossing moment arrives. For example, within that period the server may search the correspondence shown in table 1 for a camera code of the next sub-area in the idle state, treat it as the code unoccupied in the next sub-area at the crossing moment, and issue it to the robot. The timing of issuing the unoccupied camera code is described only by way of example and is not specifically limited.
In addition, in the embodiment of the application, if the number of robots in the next sub-area has reached the preset threshold at the crossing moment, letting another robot enter would exceed the number of robots the sub-area can carry, and the TOF cameras deployed on the robots in it would interfere with each other. In this case, the server controls the robot to stop moving. The server then monitors the number of robots in the next sub-area in real time; when that number falls below the preset threshold, the server triggers the robot to move and allocates it an unoccupied camera code of the next sub-area, so that after reaching the next sub-area the robot determines the corresponding anti-interference parameter and controls its deployed TOF camera based on it.
In addition, in the embodiment of the application, when the server detects that the robot has entered the next sub-area, it reclaims the robot's camera code in the previous sub-area.
In an alternative implementation, when the robot reaches the area boundary it sends the server a notification; upon receiving the notification, the server determines that the robot has entered the next sub-area and can reclaim the robot's camera code in the previous area.
In an alternative reclaiming manner, the server sets the robot's camera code in that area to the idle state. The reclaiming manner is described only by way of example and is not specifically limited.
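Putting steps 302 and 303 and the code reclaiming together, the following Python sketch illustrates one possible server-side bookkeeping; the data layout and all names are our assumptions, not taken from the patent:

```python
# Illustrative server-side bookkeeping for steps 302/303 (all names assumed).
PRESET_THRESHOLD = 3  # max cameras the adopted anti-interference mode supports

# Per sub-area: robot identifier -> assigned camera code.
robots_in_area = {"00": {}, "11": {}}
# Per sub-area: camera code -> occupied flag (cf. table 1).
code_state = {
    "00": {"0001": False, "0010": False, "0011": False},
    "11": {"1101": False, "1110": False, "1111": False},
}

def allocate_camera_code(next_area, robot_id):
    """Steps 302/303: refuse when the area is full, else hand out an idle code."""
    if len(robots_in_area[next_area]) >= PRESET_THRESHOLD:
        return None  # threshold reached: the robot is told to stop and wait
    for code, occupied in code_state[next_area].items():
        if not occupied:
            code_state[next_area][code] = True  # mark occupied
            robots_in_area[next_area][robot_id] = code
            return code  # issued to the robot before the crossing moment
    return None

def reclaim_camera_code(prev_area, robot_id):
    """On detecting entry into the next sub-area, set the old code to idle."""
    code = robots_in_area[prev_area].pop(robot_id, None)
    if code is not None:
        code_state[prev_area][code] = False
```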
Referring to fig. 5, fig. 5 is a flowchart illustrating a robot control method according to an exemplary embodiment of the present application, which is applicable to a robot, and may include the steps of:
step 501: the method comprises the steps that a robot reports current position information of the robot to a server, so that the server predicts the crossing moment when the robot reaches the region boundary of a sub-region and a next sub-region based on the reported current position information and the acquired motion information of the robot, and predicts and transmits unoccupied camera codes corresponding to the next sub-region at the crossing moment when the number of the robots in the next sub-region is determined to not reach a preset threshold at the crossing moment.
In an alternative implementation manner, the robot detects whether the distance between the current position of the robot and the area boundary of the next area is within a preset range, and if the distance between the current position of the robot and the area boundary of the next area is within the preset range, the robot reports the current position information of the robot to the central server. In this way, the center server executes the above-described robot control method after receiving the current position information reported by the robot.
In another alternative implementation, the robot may report its current position periodically. In this way, the central server, upon receiving the current position reported by the robot, may detect whether the path from the current position of the robot to the boundary of the area is within a preset range. And if the path from the current position of the robot to the boundary of the region is within a preset range, executing the robot control method.
The reporting period can be preset; when it is set small enough, the robot reports its current position information to the central server in near real time.
Step 502: the robot receives the camera code issued by the server.
Step 503: the robot determines the anti-interference parameter corresponding to the camera code and, after reaching the next sub-area, controls the TOF camera on the robot based on that parameter.
Several ways of implementing step 503 are described below.
The first way: the TOF camera on the robot works in a time division multiplexing mode. Time division multiplexing means that different TOF cameras emit light at different moments, so that the light emitted by different cameras is separated in time and interference is avoided.
When the working mode of the TOF camera on the robot is the time division multiplexing mode, the anti-interference parameter is the light emission time delay. Because time division multiplexing requires accurate control of each TOF camera's emission time, the central server is clock-synchronized with the TOF cameras deployed on the robots, ensuring that their clocks are consistent.
In implementing step 503, the robot may determine the light emission time delay corresponding to the camera code assigned to it, where different camera codes correspond to different delays.
The robot then sends the delay to the TOF camera deployed on it, and the camera emits light according to the delay. For example, if the TOF camera emits light periodically, then upon reaching each scheduled emission time it waits for the delay before emitting.
For example, as shown in fig. 4, area 1 and area 2 are adjacent. In area 1 there are two robots, robot 1 and robot 2; the camera code assigned to robot 1 is 0100 and the camera code assigned to robot 2 is 0111. In area 2 there is one robot, robot 3, and the camera code assigned to robot 3 is 1011.
Suppose the light emission time delay corresponding to 0100 is 5 s, that corresponding to 0111 is 10 s, and that corresponding to 1011 is 15 s, and that the TOF cameras on all three robots emit light every 30 s. Without the delays, all three cameras would emit at 0 s, 30 s, 60 s, and so on; emitting simultaneously would cause interference.
In the application, the TOF camera corresponding to 0100 emits at 5 s, 35 s, 65 s, and so on; the camera corresponding to 0111 emits at 10 s, 40 s, 70 s; and the camera corresponding to 1011 emits at 15 s, 45 s, 75 s. The different delays thus make each TOF camera emit at a different time, solving the interference problem among the cameras.
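The schedule above can be written down directly; a small sketch (ours) where each camera's emission instants are its periodic slots shifted by the delay of its camera code:

```python
# Illustrative time division multiplexing schedule (clocks assumed synchronized).
PERIOD_S = 30.0
DELAY_BY_CODE = {"0100": 5.0, "0111": 10.0, "1011": 15.0}  # seconds

def emission_times(camera_code, count):
    """k-th emission instant = k * period + the code-specific delay."""
    delay = DELAY_BY_CODE[camera_code]
    return [k * PERIOD_S + delay for k in range(count)]

print(emission_times("0100", 3))  # [5.0, 35.0, 65.0]
print(emission_times("0111", 3))  # [10.0, 40.0, 70.0]
print(emission_times("1011", 3))  # [15.0, 45.0, 75.0]
```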
The second way: the TOF camera on the robot works in a frequency division multiplexing mode. Frequency division multiplexing means that different TOF cameras modulate the light they emit with different modulation frequencies. Since the modulation frequencies differ, each TOF camera can identify, from the modulation frequency of received light, whether it is the reflection of its own emission or light emitted by another camera, overcoming the interference problem among TOF cameras.
When the working mode of the TOF camera on the robot is the frequency division multiplexing mode, the anti-interference parameter is the modulation frequency.
In implementing step 503, the robot may determine the modulation frequency corresponding to the camera code assigned to it, where different camera codes correspond to different modulation frequencies.
The robot may then send the determined modulation frequency to the TOF camera deployed on it, so that the camera emits frequency-modulated light. Specifically, before emitting, the TOF camera modulates the light to be generated with the determined frequency and transmits the modulated light.
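For background, in continuous-wave TOF the distance is recovered from the phase shift of the reflection at the modulation frequency; a sketch of this standard relation (general TOF theory, not spelled out in this description) shows why a camera demodulating at its own frequency is insensitive to light modulated at another camera's frequency:

```python
import math

SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def cw_tof_distance(phase_shift_rad, modulation_freq_hz):
    """Standard continuous-wave TOF relation: d = c * phi / (4 * pi * f_mod).
    Unambiguous only for distances below c / (2 * f_mod)."""
    return SPEED_OF_LIGHT * phase_shift_rad / (4.0 * math.pi * modulation_freq_hz)

# A phase shift of pi/2 at a 12 MHz modulation frequency is roughly 3.12 m.
# Light modulated at a different frequency produces no stable phase at the
# receiver's demodulation frequency and averages out instead of biasing d.
print(cw_tof_distance(math.pi / 2, 12e6))
```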
The third way: the TOF camera on the robot works in a code division multiplexing mode. Code division multiplexing means that different TOF cameras encode the laser pulses they emit with different codewords and emit the encoded pulses. Because the codes of the emitted laser pulses differ, each TOF camera can distinguish, from the coding of received pulses, whether the light is the reflection of its own emission or light emitted by another camera, so the interference problem among multiple TOF cameras can be solved.
When the working mode of the TOF camera on the robot is the code division multiplexing mode, the anti-interference parameter may be the camera code itself.
In implementing step 503, the robot may send the camera code to the TOF camera on it. The TOF camera uses the camera code as the codeword for laser pulse coding, encodes the laser pulses to be emitted according to the codeword, and emits the encoded pulses.
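A minimal sketch of the code division idea (ours; the actual pulse coding scheme is not specified here): the receiver correlates a received pulse sequence against its own codeword and accepts only a strong match:

```python
# Illustrative code division check: correlate received pulses with the codeword.
def correlate(received, codeword):
    """Correlation of two +/-1 sequences; the maximum equals len(codeword)."""
    return sum(r * c for r, c in zip(received, codeword))

def is_own_reflection(received, codeword, threshold_ratio=0.9):
    return correlate(received, codeword) >= threshold_ratio * len(codeword)

own = [1, -1, 1, 1, -1, -1, 1, -1]    # this camera's codeword
other = [1, 1, -1, 1, -1, 1, -1, -1]  # a neighbouring camera's codeword
print(is_own_reflection(own, own))    # True  -> own reflected pulses
print(is_own_reflection(other, own))  # False -> interference rejected
```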
As can be seen from the above description, the motion area of the robots is divided into a plurality of sub-areas, each adopting an existing anti-interference mode. On the one hand, by controlling robot movement, the server keeps the number of TOF cameras in each sub-area within the maximum supported by the existing mode, so the cameras within a sub-area do not interfere with each other. On the other hand, configuring adjacent sub-areas with different camera codes keeps the TOF cameras of adjacent sub-areas from interfering with each other. Together these achieve, for each sub-area, anti-interference among the maximum number of TOF cameras supported by the existing method as well as anti-interference between adjacent areas, thereby solving the interference problem of a large number of TOF cameras.
Referring to fig. 6, fig. 6 is a hardware configuration diagram of an electronic device according to an exemplary embodiment of the present application;
the electronic device includes: a communication interface 601, a processor 602, a machine-readable storage medium 603, and a bus 604; wherein the communication interface 601, the processor 602 and the machine-readable storage medium 603 perform communication with each other via a bus 604. The processor 602 may perform the robot control method described above by reading and executing machine-executable instructions corresponding to the robot control logic in the machine-readable storage medium 603.
The machine-readable storage medium 603 referred to herein may be any electronic, magnetic, optical, or other physical storage device that can contain or store information such as executable instructions or data. For example, the machine-readable storage medium may be volatile memory, non-volatile memory, or a similar storage medium. In particular, the machine-readable storage medium 603 may be RAM (Random Access Memory), flash memory, a storage drive (e.g., a hard drive), a solid state drive, any type of storage disk (e.g., an optical disk or DVD), a similar storage medium, or a combination thereof.
The electronic device may be the server or the robot, and the electronic device is only described here by way of example and is not particularly limited.
Referring to fig. 7, fig. 7 is a block diagram of a robot control device according to an exemplary embodiment of the present application.
The device is applied to a server; the motion area where robots equipped with TOF cameras move is divided into at least one sub-area, each sub-area corresponds to at least one preset camera code, and the camera codes corresponding to adjacent sub-areas are different; within each sub-area, different robots are assigned different camera codes. The device comprises:
a prediction unit 701, configured to predict the crossing moment at which the robot will reach the boundary between the current sub-area and the next sub-area based on the current position information reported by the robot and the acquired motion information of the robot;
a determining unit 702, configured to determine whether the number of robots in the next sub-area reaches a preset threshold at the crossing moment;
and an issuing unit 703, configured to, if not, issue to the robot a camera code that is unoccupied in the next sub-area at the crossing moment, so that after reaching the next sub-area the robot determines the anti-interference parameter corresponding to the camera code and controls the TOF camera deployed on it based on that parameter.
Optionally, the issuing unit 703 is further configured to control the robot to stop moving if the number of robots has reached the preset threshold; and, when it is monitored that the number of robots in the next sub-area falls below the preset threshold, to trigger the robot to move and allocate it an unoccupied camera code of the next sub-area, so that after reaching the next sub-area the robot determines the anti-interference parameter that corresponds to the camera code and matches the locally configured anti-interference method, and controls the TOF camera deployed on it based on that parameter.
Optionally, the issuing unit 703 is further configured to, upon detecting that the robot has entered the next sub-area, reclaim the robot's camera code in the previous sub-area.
Optionally, the size of the sub-region is related to a distance threshold at which any two TOF cameras do not interfere;
when the working mode of the TOF camera deployed by the robot is a time division multiplexing mode, the anti-interference parameters comprise a light emission time delay;
when the working mode of the TOF camera deployed by the robot is a frequency division multiplexing mode, the anti-interference parameters comprise a modulation frequency;
when the working mode of the TOF camera deployed by the robot is a code division multiplexing mode, the anti-interference parameters comprise a camera code.
Optionally, the divided sub-areas are arranged in a honeycomb shape.
Optionally, the prediction unit 701 is further configured to execute the robot control method either upon receiving the current position information, if the robot reports it only when it determines that the distance from its current position to the area boundary is within a preset range; or upon determining that the distance from the robot's current position to the area boundary is within the preset range, if the robot reports its position periodically, as in the sketch below.
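A hedged sketch of these two trigger policies; the function and parameter names are assumptions for illustration.

```python
def should_run_control(boundary_distance_m: float,
                       preset_range_m: float,
                       periodic_reporting: bool) -> bool:
    """Decide whether a received position report should trigger the
    robot control method on the server side."""
    if not periodic_reporting:
        # Event-driven reporting: the robot only reports when it is already
        # near the boundary, so every received report triggers the method.
        return True
    # Periodic reporting: the server filters reports by distance to boundary.
    return boundary_distance_m <= preset_range_m
```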
Optionally, the motion information includes: path information for the robot to reach its task target, and the motion speed of the robot;
the prediction unit 701 is configured, when predicting the cross-boundary time at which the robot reaches the area boundary between the current and next sub-areas based on the reported current position information and the acquired motion information, to: determine the next sub-area the robot will reach, based on the path information to the task target, the current position information, and the division of the motion area, and identify the area boundary between the current and next sub-areas; determine the path, and hence the distance, from the current position to that boundary, based on the path information and the current position information; and determine the cross-boundary time from that distance and the robot's motion speed.
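To make the prediction step concrete, here is a hedged Python sketch of the three sub-steps (find the boundary point on the planned path, measure the path distance to it, divide by the motion speed). The boundary_crossing helper is a placeholder for however the motion-area division is queried; all names are assumptions.

```python
import math

def predict_crossing_time(now_s, position, waypoints, speed_mps, boundary_crossing):
    """Estimate when the robot reaches the boundary to the next sub-area.

    now_s:     current time in seconds
    position:  (x, y) current position
    waypoints: remaining (x, y) path points toward the task target
    speed_mps: robot motion speed
    boundary_crossing(a, b): hypothetical helper returning the point where
        segment a->b leaves the current sub-area, or None if it does not.
    """
    travelled = 0.0
    prev = position
    for wp in waypoints:
        crossing = boundary_crossing(prev, wp)
        if crossing is not None:
            travelled += math.dist(prev, crossing)
            return now_s + travelled / speed_mps  # the cross-boundary time
        travelled += math.dist(prev, wp)
        prev = wp
    return None  # the planned path never leaves the current sub-area
```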
Referring to fig. 8, fig. 8 is a block diagram of another robot control device according to an exemplary embodiment of the present application.
The device is applied to a robot equipped with a TOF camera. The motion area of the robot is divided into at least one sub-area; each sub-area corresponds to at least one preset camera code, camera codes corresponding to adjacent sub-areas are different, and within each sub-area the camera codes assigned to the robots differ from one another. The apparatus comprises:
a reporting unit 801, configured to report the current position information of the robot to a server, so that the server predicts, based on the reported position and the acquired motion information of the robot, the cross-boundary time at which the robot will reach the area boundary between the current sub-area and the next sub-area, and, upon determining that the number of robots in the next sub-area will not have reached a preset threshold at that time, issues an unoccupied camera code corresponding to the next sub-area at the cross-boundary time;
a receiving unit 802, configured to receive a camera code sent by the server;
and a determining unit 803, configured to determine the anti-interference parameter corresponding to the camera code and, after reaching the next sub-area, control the TOF camera on the robot to work based on that parameter.
Optionally, when the working mode of the TOF camera on the robot is a time division multiplexing mode, the anti-interference parameter comprises a light emission delay, and the server is clock-synchronized with the TOF cameras deployed on the robots;
the determining unit 803 is configured, when determining the anti-interference parameter corresponding to the camera code and controlling the TOF camera on the robot to work based on that parameter after reaching the next sub-area, to determine the light emission delay corresponding to the camera code, wherein different camera codes correspond to different light emission delays, and to send the light emission delay to the TOF camera deployed on the robot, so that the TOF camera emits light according to that delay.
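As a sketch of the time division case (clock-synchronized cameras, one distinct emission delay per camera code), something like the following could run on the robot. DELAY_STEP_US and the set_emission_delay call are assumptions, not a real camera API.

```python
DELAY_STEP_US = 100  # assumed step so that exposure windows never overlap

def apply_time_division(camera, camera_code: int) -> None:
    # Distinct codes yield distinct delays; with synchronized clocks the
    # cameras therefore illuminate the scene in disjoint time slots.
    emission_delay_us = camera_code * DELAY_STEP_US
    camera.set_emission_delay(emission_delay_us)  # hypothetical camera call
```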
Optionally, when the working mode of the TOF camera on the robot is a frequency division multiplexing mode, the anti-interference parameter comprises a modulation frequency;
the determining unit 803 is configured, when determining the anti-interference parameter corresponding to the camera code and controlling the TOF camera on the robot to work based on that parameter after reaching the next sub-area, to determine the modulation frequency corresponding to the camera code, wherein different camera codes correspond to different modulation frequencies, and to send the modulation frequency to the TOF camera deployed on the robot, so that the TOF camera emits light modulated at that frequency.
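A matching sketch of the frequency division case; the base frequency, step, and set_modulation_frequency call are assumed values and a hypothetical API.

```python
BASE_FREQ_HZ = 10_000_000  # assumed base modulation frequency (10 MHz)
FREQ_STEP_HZ = 500_000     # assumed per-code offset (0.5 MHz)

def apply_frequency_division(camera, camera_code: int) -> None:
    # Distinct codes yield distinct modulation frequencies, so each camera's
    # demodulator rejects light modulated at the other cameras' frequencies.
    modulation_freq_hz = BASE_FREQ_HZ + camera_code * FREQ_STEP_HZ
    camera.set_modulation_frequency(modulation_freq_hz)  # hypothetical call
```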
Optionally, when the working mode of the TOF camera on the robot is a code division multiplexing mode, the anti-interference parameter comprises a camera code;
the determining unit 803 is configured, when determining the anti-interference parameter that corresponds to the camera code and matches the locally configured anti-interference method, and controlling the TOF camera on the robot to work based on that parameter, to send the camera code to the TOF camera on the robot, so that the TOF camera uses the camera code as the codeword of a laser pulse code, encodes the laser pulses to be emitted according to the codeword, and emits the encoded laser pulses.
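A sketch of the code division case, where the camera code itself serves as the codeword of the laser pulse code. The chip-gating interpretation, bit width, and set_pulse_code call are assumptions for illustration.

```python
def codeword_bits(camera_code: int, n_bits: int = 8) -> list:
    """Expand the camera code into a chip sequence (LSB first); 1 means
    'emit a laser pulse in this chip slot', 0 means 'stay dark'."""
    return [(camera_code >> i) & 1 for i in range(n_bits)]

def apply_code_division(camera, camera_code: int) -> None:
    # The receiver correlates returns against its own codeword, so pulses
    # encoded with a different camera code are suppressed as noise.
    camera.set_pulse_code(codeword_bits(camera_code))  # hypothetical call
```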
The implementation process of the functions and roles of each unit in the above device is detailed in the implementation process of the corresponding steps in the above method, and is not repeated here.
Since the device embodiments essentially correspond to the method embodiments, reference may be made to the description of the method embodiments for the relevant points. The device embodiments described above are merely illustrative: units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purposes of the solution of the present application. Those of ordinary skill in the art can understand and implement it without creative effort.
The foregoing is merely a description of preferred embodiments of the present application and is not intended to limit it; any modification, equivalent replacement, or improvement made within the spirit and principles of the present application shall fall within its scope of protection.

Claims (13)

1. A robot control method, characterized in that the method is applied to a server; a motion area in which robots equipped with TOF cameras are located is divided into a plurality of sub-areas; each sub-area adopts an existing anti-interference mode and corresponds to at least one preset camera code; the number of TOF cameras in each sub-area is smaller than the maximum number of TOF cameras supported by the existing anti-interference mode, so that the TOF cameras within a sub-area do not interfere with one another; adjacent sub-areas are configured with different camera codes representing anti-interference parameters, so that the TOF cameras of adjacent sub-areas do not interfere with one another; and within each sub-area the camera codes assigned to the robots differ from one another; the method comprising:
predicting the crossing moment of the robot reaching the region boundary of the current sub-region and the next sub-region based on the current position information reported by the robot and the acquired motion information of the robot;
determining whether the number of robots in the next sub-region reaches a preset threshold at the cross-boundary moment, wherein the preset threshold is less than or equal to the maximum number of TOF cameras that the working mode of the TOF cameras supports operating simultaneously without mutual interference;
if not, issuing an unoccupied camera code corresponding to the next sub-region at the cross-boundary moment to the robot, so that the robot determines an anti-interference parameter corresponding to the camera code after reaching the next sub-region, and controls the deployed TOF camera on the robot to work based on the anti-interference parameter;
the size of each sub-area is greater than or equal to a distance threshold beyond which no interference occurs between any two TOF cameras;
when the working mode of the TOF camera deployed on the robot is a time division multiplexing mode, the anti-interference parameter comprises a light emission delay;
when the working mode of the TOF camera deployed on the robot is a frequency division multiplexing mode, the anti-interference parameter comprises a modulation frequency; and
2. The method according to claim 1, wherein the method further comprises:
if yes, controlling the robot to stop moving; and
upon monitoring that the number of robots in the next sub-area has fallen below the preset threshold, triggering the robot to move and allocating to the robot an unoccupied camera code corresponding to the next sub-area, so that after reaching the next sub-area the robot determines the anti-interference parameter that corresponds to the camera code and matches its locally configured anti-interference method, and controls the TOF camera deployed on the robot based on that parameter.
3. The method according to claim 1 or 2, characterized in that the method further comprises:
upon detecting that the robot has driven into the next sub-area, reclaiming the camera code that the robot occupied in the sub-area it has just left.
4. The method of claim 1, wherein the divided sub-areas are arranged in a honeycomb pattern.
5. The method of claim 1, wherein:
if the current position information is reported by the robot upon determining that the distance from its current position to the area boundary is within a preset range, the robot control method is executed when the reported current position information is received;
or,
if the current position information is reported by the robot periodically, the robot control method is executed upon determining that the distance from the robot's current position to the area boundary is within the preset range.
6. The method of claim 1, wherein the motion information comprises: path information for the robot to reach its task target, and the motion speed of the robot;
the predicting the cross-boundary moment of the robot reaching the region boundary of the current sub-region and the next sub-region based on the current position information reported by the robot and the acquired motion information of the robot comprises the following steps:
determining a next sub-area to be reached by the robot based on path information of the robot reaching a task target, the current position information and the dividing condition of the motion area, and determining an area boundary between the current sub-area and the next sub-area;
determining the path, and hence the distance, from the current position to the region boundary based on the path information of the robot reaching the task target and the current position information; and
determining the cross-boundary moment at which the robot reaches the region boundary based on that distance and the motion speed of the robot.
7. A robot control method, characterized in that the method is applied to a robot equipped with a TOF camera; the motion area of the robot is divided into a plurality of sub-areas; each sub-area adopts an existing anti-interference mode and corresponds to at least one preset camera code; the number of TOF cameras in each sub-area is smaller than the maximum number of TOF cameras supported by the existing anti-interference mode, so that the TOF cameras within a sub-area do not interfere with one another; adjacent sub-areas are configured with different camera codes representing anti-interference parameters, so that the TOF cameras of adjacent sub-areas do not interfere with one another; and within each sub-area the camera codes allocated to the robots differ from one another; and the method comprises the following steps:
reporting current position information of the robot to a server side, so that the server side predicts, based on the reported current position information and the acquired motion information of the robot, the crossing moment at which the robot reaches the region boundary between the current sub-region and the next sub-region, and, upon determining that the number of robots in the next sub-region does not reach a preset threshold at the crossing moment, issues an unoccupied camera code corresponding to the next sub-region at the crossing moment; wherein the preset threshold is less than or equal to the maximum number of TOF cameras that the working mode of the TOF cameras supports operating simultaneously without mutual interference;
receiving the camera code issued by the server side;
determining an anti-interference parameter corresponding to the camera code, and controlling the TOF camera on the robot to work based on the anti-interference parameter after reaching the next sub-area;
the size of each sub-area is greater than or equal to a distance threshold beyond which no interference occurs between any two TOF cameras;
when the working mode of the TOF camera deployed on the robot is a time division multiplexing mode, the anti-interference parameter comprises a light emission delay;
when the working mode of the TOF camera deployed on the robot is a frequency division multiplexing mode, the anti-interference parameter comprises a modulation frequency; and
when the working mode of the TOF camera deployed on the robot is a code division multiplexing mode, the anti-interference parameter comprises a camera code.
8. The method of claim 7, wherein:
when the working mode of the TOF camera on the robot is a time division multiplexing mode, the anti-interference parameter comprises a light emission delay, and the server side is clock-synchronized with the TOF cameras deployed on the robots;
determining the anti-interference parameter corresponding to the camera code, and controlling the TOF camera on the robot to work based on the anti-interference parameter after reaching the next sub-area, comprises:
determining the light emission delay corresponding to the camera code, wherein different camera codes correspond to different light emission delays; and
sending the light emission delay to the TOF camera deployed on the robot, so that the TOF camera emits light according to the light emission delay.
9. The method of claim 7, wherein:
when the working mode of the TOF camera on the robot is a frequency division multiplexing mode, the anti-interference parameter comprises a modulation frequency;
determining the anti-interference parameter corresponding to the camera code, and controlling the TOF camera on the robot to work based on the anti-interference parameter after reaching the next sub-area, comprises:
determining the modulation frequency corresponding to the camera code, wherein different camera codes correspond to different modulation frequencies; and
sending the modulation frequency to the TOF camera deployed on the robot, so that the TOF camera emits light modulated at the modulation frequency.
10. The method of claim 7, wherein:
when the working mode of the TOF camera on the robot is a code division multiplexing mode, the anti-interference parameter comprises a camera code;
determining the anti-interference parameter that corresponds to the camera code and matches a locally configured anti-interference method, and controlling the TOF camera on the robot to work based on the anti-interference parameter, comprises:
sending the camera code to the TOF camera on the robot, so that the TOF camera uses the camera code as the codeword of a laser pulse code, encodes the laser pulses to be emitted according to the codeword, and emits the encoded laser pulses.
11. A robot control device, characterized in that the device is applied to a server; a motion area in which robots equipped with TOF cameras are located is divided into a plurality of sub-areas; each sub-area adopts an existing anti-interference mode and corresponds to at least one preset camera code; the number of TOF cameras in each sub-area is smaller than the maximum number of TOF cameras supported by the existing anti-interference mode, so that the TOF cameras within a sub-area do not interfere with one another; adjacent sub-areas are configured with different camera codes representing anti-interference parameters, so that the TOF cameras of adjacent sub-areas do not interfere with one another; and within each sub-area the camera codes assigned to the robots differ from one another; the apparatus comprising:
a prediction unit, configured to predict the crossing moment at which the robot reaches the region boundary between the current sub-region and the next sub-region, based on the current position information reported by the robot and the acquired motion information of the robot;
a determining unit, configured to determine whether the number of robots in the next sub-area reaches a preset threshold at the crossing moment, wherein the preset threshold is less than or equal to the maximum number of TOF cameras that the working mode of the TOF cameras supports operating simultaneously without mutual interference;
an issuing unit, configured to, if not, issue to the robot an unoccupied camera code corresponding to the next sub-area at the crossing moment, so that after reaching the next sub-area the robot determines the anti-interference parameter corresponding to the camera code and controls the TOF camera deployed on the robot to work based on the anti-interference parameter;
the size of each sub-area is greater than or equal to a distance threshold beyond which no interference occurs between any two TOF cameras;
when the working mode of the TOF camera deployed on the robot is a time division multiplexing mode, the anti-interference parameter comprises a light emission delay;
when the working mode of the TOF camera deployed on the robot is a frequency division multiplexing mode, the anti-interference parameter comprises a modulation frequency; and
when the working mode of the TOF camera deployed on the robot is a code division multiplexing mode, the anti-interference parameter comprises a camera code.
12. A robot control device, characterized in that the device is applied to a robot equipped with a TOF camera; the motion area of the robot is divided into a plurality of sub-areas; each sub-area adopts an existing anti-interference mode and corresponds to at least one preset camera code; the number of TOF cameras in each sub-area is smaller than the maximum number of TOF cameras supported by the existing anti-interference mode, so that the TOF cameras within a sub-area do not interfere with one another; adjacent sub-areas are configured with different camera codes representing anti-interference parameters, so that the TOF cameras of adjacent sub-areas do not interfere with one another; and within each sub-area the camera codes assigned to the robots differ from one another; the apparatus comprising:
a reporting unit, configured to report current position information of the robot to a server side, so that the server side predicts, based on the reported current position information and the acquired motion information of the robot, the crossing moment at which the robot reaches the region boundary between the current sub-region and the next sub-region, and, upon determining that the number of robots in the next sub-region does not reach a preset threshold at the crossing moment, issues an unoccupied camera code corresponding to the next sub-region at the crossing moment, wherein the preset threshold is less than or equal to the maximum number of TOF cameras that the working mode of the TOF cameras supports operating simultaneously without mutual interference;
a receiving unit, configured to receive the camera code issued by the server side;
a determining unit, configured to determine the anti-interference parameter corresponding to the camera code and, after reaching the next sub-area, control the TOF camera on the robot to work based on the anti-interference parameter;
the size of each sub-area is greater than or equal to a distance threshold beyond which no interference occurs between any two TOF cameras;
when the working mode of the TOF camera deployed on the robot is a time division multiplexing mode, the anti-interference parameter comprises a light emission delay;
when the working mode of the TOF camera deployed on the robot is a frequency division multiplexing mode, the anti-interference parameter comprises a modulation frequency; and
when the working mode of the TOF camera deployed on the robot is a code division multiplexing mode, the anti-interference parameter comprises a camera code.
13. An electronic device comprising a readable storage medium and a processor;
wherein the readable storage medium is for storing machine executable instructions;
the processor being configured to read the machine executable instructions on the readable storage medium and execute the instructions to implement the steps of the method of any one of claims 1-10.
CN202210325814.7A 2022-03-29 2022-03-29 Robot control method, device and equipment Active CN114643580B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210325814.7A CN114643580B (en) 2022-03-29 2022-03-29 Robot control method, device and equipment


Publications (2)

Publication Number Publication Date
CN114643580A CN114643580A (en) 2022-06-21
CN114643580B true CN114643580B (en) 2023-10-27

Family

ID=81995523

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210325814.7A Active CN114643580B (en) 2022-03-29 2022-03-29 Robot control method, device and equipment

Country Status (1)

Country Link
CN (1) CN114643580B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102638692A (en) * 2011-01-31 2012-08-15 微软公司 Reducing interference between multiple infra-red depth cameras
CN106461783A (en) * 2014-06-20 2017-02-22 高通股份有限公司 Automatic multiple depth cameras synchronization using time sharing
CN106683130A (en) * 2015-11-11 2017-05-17 杭州海康威视数字技术股份有限公司 Depth image acquisition method and device
CN108718453A (en) * 2018-06-15 2018-10-30 合肥工业大学 A kind of subregion network-building method under highly dense WLAN scenes
CN109459738A (en) * 2018-06-06 2019-03-12 杭州艾芯智能科技有限公司 A kind of more TOF cameras mutually avoid the method and system of interference

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3690473A1 (en) * 2019-02-01 2020-08-05 Terabee S.A.S. A spatial sensor synchronization system using a time-division multiple access communication system


Also Published As

Publication number Publication date
CN114643580A (en) 2022-06-21

Similar Documents

Publication Publication Date Title
US11397442B2 (en) Travel planning system, travel planning method, and non-transitory computer readable medium
US20210382493A1 (en) Spatiotemporal Robotic Navigation
EP3499334B1 (en) Multi-sensor safe path system for autonomous vehicles
US10046458B2 (en) System of confining robot movement actions and a method thereof
US11860621B2 (en) Travel control device, travel control method, travel control system and computer program
EP2287694B1 (en) Distributed visual guidance for a mobile robotic device
JP7008238B2 (en) How to control the parking lot vehicle driving control system and the parking lot vehicle driving control system
KR102279122B1 (en) Scooter scheduling method, apparatus, recording medium and electronic apparatus
CN108476064B (en) System and method for targeted data communication
US20150065146A1 (en) Communication with a mobile virtual base station
US10687259B2 (en) Communications with a mobile virtual base station
KR20100064039A (en) Method for monitoring location of a construction machinery
CN106257561B (en) Parking lot sensor, control method thereof and parking system
US20210286373A1 (en) Travel control device, travel control method and computer program
CN112071110A (en) Autonomous parking method, apparatus, system, computer device and storage medium
CN114643580B (en) Robot control method, device and equipment
KR101517150B1 (en) Apparatus, method and system for detecting objects using hand-over between antennas of radar device
CN111127915A (en) Emergency vehicle multi-intersection absolute priority control method and device and storage medium
CN110261813B (en) Positioning control method and device, electronic equipment and storage medium
CN108108850A (en) A kind of telecontrol equipment and its pathfinding control method and the device with store function
EP3679438B1 (en) Indoor positioning system for mobile objects
CN109683556B (en) Cooperative work control method and device for self-moving equipment and storage medium
CN102502281B (en) Stacker as well as control method and device thereof
US20060069470A1 (en) Bi-directional absolute automated tracking system for material handling
CN116507985A (en) Autonomous driving control of a mining vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 310051 room 304, B / F, building 2, 399 Danfeng Road, Binjiang District, Hangzhou City, Zhejiang Province

Applicant after: Hangzhou Hikvision Robot Co.,Ltd.

Address before: 310051 room 304, B / F, building 2, 399 Danfeng Road, Binjiang District, Hangzhou City, Zhejiang Province

Applicant before: HANGZHOU HIKROBOT TECHNOLOGY Co.,Ltd.

GR01 Patent grant