WO2022017296A1 - Robot motion information recognition method, obstacle avoidance method, robot capable of obstacle avoidance, and obstacle avoidance system - Google Patents

Robot motion information recognition method, obstacle avoidance method, robot capable of obstacle avoidance, and obstacle avoidance system

Info

Publication number
WO2022017296A1
Authority
WO
WIPO (PCT)
Prior art keywords
robot
projection pattern
projection
obstacle avoidance
motion information
Prior art date
Application number
PCT/CN2021/106897
Other languages
French (fr)
Chinese (zh)
Inventor
钟扬
Original Assignee
炬星科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 炬星科技(深圳)有限公司 filed Critical 炬星科技(深圳)有限公司
Publication of WO2022017296A1 publication Critical patent/WO2022017296A1/en

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1656Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
    • B25J9/1666Avoiding collision or forbidden zones

Definitions

  • the invention belongs to the technical field of robot obstacle avoidance, and particularly relates to a robot motion information identification method, an obstacle avoidance method, an obstacle avoidance robot, and an obstacle avoidance system.
  • AMR Autonomous Mobile Robot
  • obstacles include fixed obstacles such as walls and shelves, as well as moving obstacles such as pedestrians.
  • AMR uses lidar to identify the outline of objects to avoid obstacles. When it detects that the obstacle is close to itself within a certain distance, it triggers obstacle avoidance. The longer the obstacle avoidance distance is, the safer it is but the more efficiency is lost.
  • robots are also high-speed moving objects in the scene and are among the main objects to be avoided. Since robots move faster than pedestrians and are not as flexible as pedestrians, there is a greater chance of collisions between robots. For this reason, the running speed of the robots has to be reduced and the obstacle avoidance distance increased, which leads to a loss of efficiency. At the same time, in some scenarios that are prone to collision due to occlusion, such as narrow intersections (a robot on one side cannot detect the robot on the other side), robots have a higher chance of colliding.
  • the present invention provides a robot motion information identification method, an obstacle avoidance method, an obstacle avoidance robot, and an obstacle avoidance system, so as to solve the problems of collision and crowding easily caused by cooperative movement of multiple robots in the prior art, make the robot more flexible and safe when moving, avoid collisions and blockages in complex scenarios such as narrow spaces or intersections, and improve the efficiency of multi-robot collaboration.
  • the present invention provides a method for identifying motion information of a robot, including: generating, on the current travel path of the robot, a first projection pattern carrying the motion information of the robot; visually capturing the first projection pattern; and identifying the motion information of the robot carried by the first projection pattern.
  • the present invention also provides a multi-robot motion obstacle avoidance method, comprising: the first robot projects a first projection pattern carrying its own motion information on the ground along its travel direction;
  • the second robot captures the first projection pattern through a vision device installed on its body;
  • the second robot recognizes the motion information of the first robot corresponding to the first projection pattern, and performs corresponding obstacle avoidance measures in combination with its own motion information to avoid collision with the first robot.
  • the present invention also provides a robot that can dynamically avoid obstacles, including:
  • the main body of the robot can be moved autonomously;
  • a projection device, which is fixedly arranged relative to the robot body and includes an electronic display and a plurality of projection films, used by the robot to project different projection patterns;
  • a vision device which is installed on the robot body on the same side as the projection device, and is used to capture the projection pattern of the robot's travel area.
  • the present invention also provides a multi-robot obstacle avoidance system, including:
  • a projection module that, in response to the autonomous movement of the robot, generates different projection patterns in the travel path of the robot, the projection patterns carrying the motion information of the robot;
  • a vision module for capturing the projection pattern in the travel path of the robot;
  • a processing module for identifying the motion information of the robot corresponding to the projection pattern and, in combination with the motion information of another robot, controlling that robot to perform corresponding obstacle avoidance measures.
  • the invention provides a robot motion information identification method, an obstacle avoidance method, an obstacle avoidance robot, and an obstacle avoidance system.
  • a first projection pattern carrying the motion information of the robot is generated, the first projection pattern is visually captured, and the motion information of the robot carried by the first projection pattern is recognized.
  • by transmitting the motion information of the robot through the projection pattern, visually capturing the projection pattern for identification processing, and further performing obstacle avoidance planning, a robot can obtain the travel information of other robots on its route before its lidar detects those robots.
  • Such pre-emptive avoidance can, on the one hand, improve safety and reduce the probability of corner collisions; on the other hand, it can increase the running speed and efficiency on a safer basis.
  • FIG. 1 is a flowchart of an embodiment of a method for recognizing motion information of a robot according to the present invention.
  • FIG. 2 is a flowchart of another embodiment of the motion information identification method of the present invention.
  • FIG. 3 is a flowchart of an embodiment of the multi-robot dynamic obstacle avoidance method of the present invention.
  • FIG. 4 is a flowchart of another embodiment based on the embodiment of FIG. 3 .
  • FIG. 5 is a flowchart of another embodiment of the multi-robot dynamic obstacle avoidance method of the present invention.
  • FIG. 6 is a schematic structural diagram of a device of the obstacle avoidance robot of the present invention.
  • FIG. 7 is a schematic diagram of another device structure of the obstacle avoidance robot of the present invention.
  • FIG. 8 is a schematic diagram of projection and visual capture of the obstacle avoidance robot of the present invention.
  • FIG. 9 is a schematic diagram of the obstacle avoidance method of the obstacle avoidance robot of the present invention.
  • FIG. 10 is a schematic block diagram of the robot obstacle avoidance system of the present invention.
  • FIG. 1 is a schematic flowchart of an embodiment of a method for recognizing motion information of a robot of the present invention; a method for recognizing motion information of a robot of the present invention may be implemented as steps S10 - S30 described below.
  • Step S10 generating a first projection pattern carrying the motion information of the robot on the current travel path of the robot.
  • the working scene of the robot is generally a large area, such as inside various types of warehouses.
  • when the robot receives a task, it goes to the task destination point according to the task prompt, traveling from the starting point to the destination point according to the map.
  • a rough motion path is generated, the robot travels along the path, and a first projection pattern is generated on the ground in front of the robot.
  • the first projection pattern follows the real-time motion of the robot and corresponds to the current motion state of the robot.
  • the preset distances between the first projection pattern and the robot are different distances such as 2 meters, 5 meters, and 10 meters. It can be understood that the greater the movement speed of the robot, the longer the projection distance.
  • the motion information of the robot includes the robot's own code, speed, driving direction, distance from the projected pattern, and so on; this information can be represented by different projection patterns.
  • the projected pattern is a pattern whose semantics the robot can recognize, for example a QR (Quick Response) two-dimensional code.
  • several preset motion states can be used as basic projection patterns: for example, a speed value is taken from each of several speed bands and paired with conventional maneuvers such as turning left, turning right, going straight, and stopping; several projection distance values such as 2, 3, 5, or 10 meters are then chosen, the values of these attributes are extracted, and the robot's own code is added to form the basic projection pattern. Further, the projection distance and speed can be combined into a single time attribute, that is, the time interval after which the robot, keeping its motion state unchanged, reaches the first projection pattern, such as 3 s or 5 s; a value of 3 s means the robot will reach the first projection pattern generated at this moment after 3 s.
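  • To make the encoding above concrete, the following minimal Python sketch composes such a payload; the field separators, value sets, and the helper name build_basic_pattern_payload are illustrative assumptions rather than the patent's actual format:

```python
# Illustrative sketch only: the patent does not specify a concrete encoding, so the
# field names, separators, and payload layout below are assumptions.

DIRECTIONS = {"left", "right", "straight", "stop"}
PROJECTION_DISTANCES_M = (2, 3, 5, 10)  # example projection distances from the text


def build_basic_pattern_payload(robot_code: str, direction: str,
                                speed_mps: float, distance_m: int) -> str:
    """Compose a semantic payload that a QR-style projection pattern could carry."""
    if direction not in DIRECTIONS:
        raise ValueError(f"unknown direction: {direction}")
    if distance_m not in PROJECTION_DISTANCES_M:
        raise ValueError(f"unsupported projection distance: {distance_m}")
    # Distance and speed can alternatively be folded into a single arrival-time attribute.
    arrival_time_s = distance_m / speed_mps if speed_mps > 0 else float("inf")
    return f"{robot_code};{direction};{speed_mps:.1f}m/s;{distance_m}m;{arrival_time_s:.1f}s"


# Example: build_basic_pattern_payload("JX02", "straight", 1.5, 3)
# -> "JX02;straight;1.5m/s;3m;2.0s", which any QR encoder could render as the pattern.
```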
  • the above basic projection patterns are taught to the robot in advance, so that in the actual operation scene, the robot can quickly identify the projection pattern, and make corresponding planning measures to avoid obstacles.
  • a recognition model can be generated by machine training using pre-prepared supervision data, and image recognition can be performed by using the obtained recognition model.
  • the projection pattern is divided according to the number of robot motion attributes.
  • the projection pattern is composed of four parts: robot number, direction, speed and projection distance.
  • the position of each part within the projection pattern is preset, and a small image of a fixed size is used for each part.
  • a regressor is trained by machine learning that takes such a small image as input and outputs the identification code projected in it; the image to be recognized is then divided into sub-regions according to the preset layout, each sub-region is resized to the fixed size to form a small image, and the trained regressor extracts the motion attribute value from each small image.
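  • The sub-region recognition described above might look like the following sketch; the layout, the small-image size, and the regressor object are placeholders, since the patent does not fix them:

```python
# Illustrative sketch, not the patent's implementation: the sub-region layout, the
# small-image size, and the `regressor` object are placeholders/assumptions.
import numpy as np

SMALL_SIZE = 32  # assumed fixed size for each small image

# Assumed preset layout: (row slice, col slice) for robot number, direction, speed, distance.
SUBREGIONS = {
    "robot_number": (slice(0, 64),   slice(0, 64)),
    "direction":    (slice(0, 64),   slice(64, 128)),
    "speed":        (slice(64, 128), slice(0, 64)),
    "distance":     (slice(64, 128), slice(64, 128)),
}


def resize_to_small(patch: np.ndarray, size: int = SMALL_SIZE) -> np.ndarray:
    """Crude nearest-neighbour resize so the example stays dependency-free."""
    rows = np.linspace(0, patch.shape[0] - 1, size).astype(int)
    cols = np.linspace(0, patch.shape[1] - 1, size).astype(int)
    return patch[np.ix_(rows, cols)]


def read_motion_attributes(pattern_image: np.ndarray, regressor) -> dict:
    """Split the captured pattern by the preset layout and regress each attribute."""
    attributes = {}
    for name, (rs, cs) in SUBREGIONS.items():
        small = resize_to_small(pattern_image[rs, cs])
        attributes[name] = regressor.predict(small[None, ...])  # regressor trained beforehand
    return attributes
```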
  • Step S20 visually capturing the first projection pattern.
  • the first projection pattern is captured visually, that is, a camera, a vision sensor, or a similar device is used for snapshots or real-time monitoring to obtain the image information of the first projection. It can be understood that the generated first projection pattern is presented in an image form that the vision device can easily capture. Further, the vision device can be set to capture a preset pattern type and then perform processing such as uploading and storing; if an ordinary ground image is detected, no further processing is performed.
  • the visual capture of the first projection pattern may be performed by the first robot itself, by the second robot, or by another vision device in the working environment.
  • when the first robot captures the first projection pattern carrying its own motion state, it can do nothing by default, or it can confirm whether the projection pattern is displayed normally on its travel path.
  • the second robot refers to any robot other than the robot corresponding to the information carried by the first projection pattern. The second robot recognizes the first projection pattern, obtains the motion information of the first robot, and thereby learns the first robot's motion speed, motion direction, and distance from the first projection pattern; based on its own motion speed and direction, it determines the approximate position where it will meet the first robot and calculates the collision probability. If the collision probability is greater than a preset value, it takes obstacle avoidance measures, such as decelerating, accelerating, stopping, or steering.
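  • A minimal sketch of such a decision is given below, assuming a closest-approach heuristic; the mapping to a collision probability and the thresholds are illustrative, not the patent's definition:

```python
# Hedged sketch of the decision described above. p1, v1 would come from the first
# robot's projection pattern (pattern location plus encoded distance and heading);
# p2, v2 are the second robot's own position and velocity.
import math


def closest_approach(p1, v1, p2, v2):
    """Time and distance of closest approach for two constant-velocity robots (2D)."""
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    vx, vy = v2[0] - v1[0], v2[1] - v1[1]
    vv = vx * vx + vy * vy
    t = 0.0 if vv == 0 else max(0.0, -(rx * vx + ry * vy) / vv)
    dx, dy = rx + vx * t, ry + vy * t
    return t, math.hypot(dx, dy)


def plan_avoidance(p1, v1, p2, v2, safe_dist=1.0, prob_threshold=0.5):
    """Map the closest-approach distance to a crude collision probability and a measure."""
    _, d_min = closest_approach(p1, v1, p2, v2)
    collision_prob = max(0.0, 1.0 - d_min / (2 * safe_dist))  # assumed mapping
    if collision_prob > prob_threshold:
        return "decelerate"  # could equally be accelerate, stop, or steer
    return "keep_going"
```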
  • Step S30 identifying the robot motion information carried by the first projection pattern.
  • a processing device may be set up in the working environment to identify the first projection pattern and perform further processing, for example identifying the projection pattern information, performing unified computation for the robots in the area covered by the projection pattern, and then scheduling the robots in that area.
  • a processing device is provided on the robot body, and the processing device can be used alone as a sub-processing system of the robot, or the processing device can be integrated or embedded in the robot central processing system in the form of a processing module for identification.
  • step S10 further includes:
  • Step S11 the first robot projects the first projection pattern on the ground along its travel direction, and/or an external projection device in the working environment of the first robot projects the first projection pattern.
  • the first robot refers to any robot in the robot cluster in the same working environment or in the same space and on the same map.
  • each robot forms its own path plan and travels along its planned path; on the path it travels, that is, directly in front of the robot's walking direction, a first projection pattern is generated. In other words, the generation position of the first projection pattern is the position the robot is about to reach.
  • the first projection pattern moves with the first robot, and the distance between the first projection pattern and the robot can be preset to different values such as 2, 5, or 10 meters.
  • the first projection pattern is generated by the projection of the first robot itself, and the generated first projection pattern carries the motion information of the first robot.
  • the motion information of the robot includes the robot's own code, speed, driving direction, distance from the projection pattern, and so on; this information can be represented by different projection patterns, and the projection pattern is a pattern whose semantics the robot can recognize, for example a QR (Quick Response) two-dimensional code.
  • several preset motion states can be used as basic projection patterns: for example, a speed value is taken from each of several speed bands and paired with conventional maneuvers such as turning left, turning right, going straight, and stopping; several projection distance values such as 2, 3, 5, or 10 meters are then chosen, the values of these attributes are extracted, and the robot's own code is added to form the basic projection pattern. Further, the projection distance and speed can be combined into a single time attribute, that is, the time interval after which the robot, keeping its motion state unchanged, reaches the first projection pattern, such as 3 s or 5 s; a value of 3 s means the robot will reach the first projection pattern generated at this moment after 3 s.
  • the above basic projection patterns are taught to the robot in advance, so that in the actual operation scene, the robot can quickly identify the projection pattern, and make corresponding planning measures to avoid obstacles.
  • a recognition model can be generated by machine training using pre-prepared supervision data, and image recognition can be performed by using the obtained recognition model.
  • the projection pattern is divided according to the number of robot motion attributes.
  • the projection pattern is composed of four parts: robot number, direction, speed and projection distance.
  • the position of each part within the projection pattern is preset, and a small image of a fixed size is used for each part.
  • a regressor is trained by machine learning that takes such a small image as input and outputs the identification code projected in it; the image to be recognized is then divided into sub-regions according to the preset layout, each sub-region is resized to the fixed size to form a small image, and the trained regressor extracts the motion attribute value from each small image.
  • an external projection device in the first robotic work environment projects the first projection pattern.
  • the external projection device performs the projection operation on some preset road sections, which are intersections or areas where robots move frequently.
  • the projection device is further provided with a detection device for detecting a robot about to enter the projection road section, and projecting the detected robot motion information to a preset projection area to generate a first projection pattern.
  • the projection device is provided with a communication device for receiving motion information sent by a robot about to enter the projection area and projecting the projection pattern corresponding to that information onto the preset projection area. It can be understood that the projection device can also project dynamically, following the movement of the robot in the projection section, that is, the projection pattern moves in unison with the robot.
  • the first projection pattern only contains two pieces of robot motion information: the movement direction and the arrival time.
  • the arrival time is the time interval after which the robot, keeping its motion state unchanged, reaches the first projection pattern, such as 3 s or 5 s; a value of 3 s means the robot will reach the first projection pattern generated at this moment after 3 s.
  • if the movement direction is set to straight, left, or right, and the arrival time to 3 s, 4 s, or 5 s, the basic projection patterns are straight in 3/4/5 s, left in 3/4/5 s, and right in 3/4/5 s, nine types in total.
  • the nine projection patterns are numbered 01-09; when a robot is about to enter the projection area, it only needs to send a number between 01 and 09 to the projection device, and the projection device projects the corresponding projection pattern onto the projection section.
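  • The nine-pattern lookup could be represented as a simple table, as in the sketch below; the specific assignment of the numbers 01-09 to direction/time pairs is an assumption, only the 3 x 3 combination comes from the text:

```python
# Sketch of the 01-09 lookup described above; the exact numbering is assumed.
BASIC_PATTERNS = {
    "01": ("straight", 3), "02": ("straight", 4), "03": ("straight", 5),
    "04": ("left", 3),     "05": ("left", 4),     "06": ("left", 5),
    "07": ("right", 3),    "08": ("right", 4),    "09": ("right", 5),
}


def pattern_for(code: str):
    """Return (movement direction, arrival time in seconds) for a pattern number."""
    return BASIC_PATTERNS[code]


# A robot about to enter the projection area only sends a code such as "05";
# the external projection device then projects the corresponding pattern.
```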
  • step S20 further includes:
  • Step S21 the second robot captures the first projection pattern through a vision device installed on its body, and/or a vision device installed in the robot operating environment captures the first projection pattern.
  • the second robot refers to any other robot except the robot corresponding to the information carried by the first projection pattern.
  • a vision device is installed on the second robot body, and the vision device is a camera, a vision module, or a vision sensor, and is used for snapshot or real-time monitoring to obtain the image information of the first projection.
  • the generated first projection pattern is presented in an image form that the vision device can easily capture. Further, the vision device can be set to capture the preset pattern type and then perform processing such as uploading and storing, or the second robot can be preset to turn on its vision device when it enters the projection area, capturing the image and feeding it back to the robot processing system in real time.
  • a vision device installed in the robotic work environment captures the first projected pattern. Further, the vision device captures the projection pattern in the working environment of the robot, and sends the captured projection pattern to the processing module of the robot, and the processing module of the robot recognizes the projection pattern information, makes judgments, and takes obstacle avoidance measures.
  • the vision devices can be distributed at intersections, with each vision device responsible only for capturing the projection pattern of one intersection and communicating only with the robots in that intersection section; a second robot in the intersection section receives the transmitted first projection pattern, identifies and processes it, and takes corresponding obstacle avoidance measures in combination with its own motion state.
  • the vision device captures the projection patterns of the entire robot operating environment, that is, all projection patterns on the robot walking map, and feeds back all projection pattern information to the robot management system platform for processing and real-time monitoring by the platform.
  • the platform analyzes the motion states of the robots, in particular robot motion in high-risk road sections, and issues obstacle avoidance instructions to robots with a higher collision risk.
  • step S30 further includes:
  • Step S31 the processing device installed on the robot receives the first projection pattern and identifies the motion information of the first robot corresponding to the projection pattern; and/or the processing device installed in the robot operating environment receives the first projection pattern and identifies the motion information of the first robot corresponding to the first projection pattern.
  • the processing device on the robot receives the first projection pattern, and identifies motion information of the first robot corresponding to the first projection pattern.
  • the processing device is separately installed on the robot body, as an independent sub-processing system, used to complete the dynamic obstacle avoidance of the robot. It can be understood that the processing device can also be set as a processing module integrated on the robot central processing unit.
  • the second robot recognizes the first projection pattern through the processing device, obtains the motion information of the first robot, and thereby learns the first robot's movement speed, movement direction, and distance from the first projection pattern; combining its own movement speed and direction, it determines the approximate position where it will meet the first robot and calculates the collision probability. If the collision probability is greater than a preset value, it takes obstacle avoidance measures, such as decelerating, accelerating, stopping, or steering.
  • the processing device installed in the robot working environment receives the first projection pattern, and identifies the motion information of the first robot corresponding to the first projection pattern.
  • the processing device is arranged in one or more projection areas, and the projection areas are some preset road sections, and the road sections are intersections or areas where robots move frequently.
  • the processing device is provided with a communication device for receiving the projection pattern image information sent by the vision device in the projection area, identifying the motion information of the robot, and calculating the collision probability in combination with the motion state information of other robots in the projection area; if the collision probability is greater than the preset value, it issues obstacle avoidance instructions, for example sending deceleration, acceleration, stop, or steering commands to the corresponding other robots in the projection area.
  • the processing device may also be a server for uniformly processing the projection pattern information of all projection areas in a warehouse and making obstacle avoidance planning.
  • FIG. 3 is a schematic flowchart of a multi-robot motion obstacle avoidance method according to the present invention.
  • a multi-robot motion obstacle avoidance method of the present invention may be implemented as steps S100-S300 described below.
  • Step S100 the first robot projects a first projection pattern carrying its own motion information on the ground along its travel direction;
  • Step S200 the second robot captures the first projection pattern through a visual device installed on its body
  • Step S300 the second robot recognizes the motion information of the first robot corresponding to the first projection pattern, and performs corresponding obstacle avoidance measures in combination with its own motion information to avoid collision with the first robot.
  • the first robot 100 refers to any robot in the robot cluster in the same work environment or in the same space and in the same map.
  • each robot forms its own path plan.
  • the robot travels along its own planned path.
  • the first projection pattern moves with the first robot.
  • the distance between the first projection pattern and the robot can be preset as 2 meters, 5 meters, 10 meters and other different distances.
  • the first projection pattern is projected and generated by the first robot 100 itself, and the generated first projection pattern carries the motion information of the first robot.
  • the motion information of the robot includes the robot's own code, speed, driving direction, distance from the projection pattern, and so on; this information can be represented by different projection patterns.
  • the projection pattern is a pattern whose semantics the robot can recognize, for example a QR (Quick Response) two-dimensional code.
  • several preset motion states can be used as basic projection patterns: for example, a speed value is taken from each of several speed bands and paired with conventional maneuvers such as turning left, turning right, going straight, and stopping; several projection distance values such as 2, 3, 5, or 10 meters are then chosen, the values of these attributes are extracted, and the robot's own code is added to form the basic projection pattern. Further, the projection distance and speed can be combined into a single time attribute, that is, the time interval after which the robot, keeping its motion state unchanged, reaches the first projection pattern, such as 3 s or 5 s; a value of 3 s means the robot will reach the first projection pattern generated at this moment after 3 s.
  • the above basic projection patterns are taught to the robot in advance, so that in the actual operation scene, the robot can quickly identify the projection pattern, and make corresponding planning measures to avoid obstacles.
  • a recognition model can be generated by machine training using pre-prepared supervision data, and image recognition can be performed by using the obtained recognition model.
  • the projection pattern is divided according to the number of robot motion attributes.
  • the projection pattern is composed of four parts: robot number, direction, speed and projection distance.
  • the position of each part within the projection pattern is preset, and a small image of a fixed size is used for each part.
  • a regressor is trained by machine learning that takes such a small image as input and outputs the identification code projected in it; the image to be recognized is then divided into sub-regions according to the preset layout, each sub-region is resized to the fixed size to form a small image, and the trained regressor extracts the motion attribute value from each small image.
  • the second robot 200 refers to any other robot except the robot corresponding to the information carried by the first projection pattern.
  • a vision device is installed on the body of the second robot 200 , and the vision device is a camera, a vision module, or a vision sensor, and is used for snapshot or real-time monitoring to obtain the image information of the first projection.
  • the generated first projection pattern is an image presentation form that can be easily captured by the visual device.
  • the vision device can be set to capture a preset pattern type and then perform processing such as uploading and storing, or the second robot 200 can be preset to turn on its vision device when entering the projection area, capturing the image and feeding it back to the robot processing system in real time.
  • the processing device on the second robot 200 receives the first projection pattern, and identifies motion information of the first robot corresponding to the first projection pattern.
  • the processing device is separately installed on the robot body, as an independent sub-processing system, used to complete the dynamic obstacle avoidance of the robot. It can be understood that the processing device can also be set as a processing module integrated on the robot central processing unit.
  • the second robot recognizes the first projection pattern through the processing device, obtains the motion information of the first robot, and thereby learns the first robot's movement speed, movement direction, and distance from the first projection pattern; combining its own movement speed and direction, it determines the approximate position where it will meet the first robot and calculates the collision probability. If the collision probability is greater than a preset value, it takes obstacle avoidance measures, such as decelerating, accelerating, stopping, or steering.
  • FIG. 4 is another embodiment of the multi-robot obstacle avoidance method of the present invention, and step S300 further includes:
  • Step S310 based on the optimal reciprocal collision avoidance algorithm, the second robot 200 performs obstacle avoidance planning and changes its velocity direction according to the current position and speed of the first robot 100 and/or the position and speed of the first robot 100 carried in the first projection pattern; or the second robot 200 calculates the first time at which the first robot 100 will reach the first projection pattern, and changes its own movement speed by a preset time interval so that it reaches the first projection pattern earlier or later than the first time by that interval.
  • the robots perform obstacle avoidance planning according to the current position and speed of the first robot 100, or the position and speed of the first robot 100 carried in the first projection pattern, and change their speed and direction.
  • the optimal mutual collision avoidance algorithm is an ORCA (Optimal Reciprocal Collision Avoidance) obstacle avoidance algorithm to navigate the projection area.
  • the second robot 200 obtains the speed and current position of the first robot 100 by identifying the projection pattern information, and the first robot 100 likewise obtains the speed and current position of the second robot 200 by identifying the projection pattern information; both adopt the same obstacle avoidance strategy and jointly select new velocities to avoid each other.
  • the second robot 200 calculates the first time at which the first robot 100 will arrive at the first projection pattern, and changes its own movement speed by a preset time interval so that it reaches the first projection pattern earlier or later than the first time by that interval.
  • the preset time interval is 1 s, 2 s or 3 s, etc.
  • combining its own motion information, the second robot 200 determines which robot would reach the projection pattern first if neither changed its motion state; if the first robot 100 would reach the projection pattern first or at the same time, the second robot 200 decelerates so that it reaches the projection pattern a preset time, such as 2 s, later than the first robot 100.
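  • A hedged sketch of this arrival-time rule follows; it assumes straight-line travel at constant speed and uses the 2 s delay from the example above:

```python
# Illustrative sketch of the arrival-time rule; distances and speeds are assumed to be
# measured along each robot's path to the shared projection pattern location.
def adjust_speed_for_crossing(d1_m, v1_mps, d2_m, v2_mps, delay_s=2.0):
    """Return a new speed for the second robot so that, if the first robot would reach
    the projection pattern first or at the same time, the second robot arrives at
    least `delay_s` seconds later; otherwise keep its current speed."""
    t1 = d1_m / v1_mps  # first robot's time to its projection pattern
    t2 = d2_m / v2_mps  # second robot's time to the same spot
    if t2 < t1:
        return v2_mps                     # second robot crosses first: keep speed
    target_t2 = max(t2, t1 + delay_s)     # otherwise cross at least delay_s later
    return d2_m / target_t2               # decelerated speed (unchanged if already late enough)


# Example: first robot 3 m from the pattern at 1.5 m/s (t1 = 2 s), second robot 4 m
# away at 2 m/s (t2 = 2 s): the second robot slows to 4 / 4 = 1 m/s.
```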
  • the first robot 100 can maintain its own motion state to perform operations.
  • according to the robot working space and how frequently each road section is traveled, the paths traveled by the robots can be divided into main roads and secondary roads; the main roads are sections that robots frequently pass through or where important tasks are performed, and the secondary roads are sections with few robots or where secondary tasks are performed.
  • the robot on the main road is the first robot 100, which can only project the projection pattern without capturing the projection pattern.
  • the robot on the secondary road is the second robot 200, which only captures the projection pattern and does not project one. In other words, the first robot 100 drives on the main road, the second robot 200 drives on the secondary road, the second robot 200 actively avoids obstacles, and the first robot 100 can maintain its own desired speed to complete more tasks.
  • FIG. 5 is a schematic flowchart of another implementation manner of a multi-robot motion obstacle avoidance method.
  • after the step "the second robot recognizes the motion information of the first robot corresponding to the first projection pattern, and performs corresponding obstacle avoidance measures in combination with its own motion information to avoid collision with the first robot", step S400 is also executed.
  • Step S400 the second robot projects a second projection pattern on the ground along its travel direction after the obstacle avoidance measure.
  • the second projection pattern represents the motion information of the second robot 200 after it has made the obstacle avoidance plan. For example, after the second robot 200 recognizes the projection pattern of the first robot 100 and, combining its own motion state, calculates that a collision is highly likely, it needs to take obstacle avoidance measures, such as reducing its original speed of 2 m/s to 0.5 m/s. When the deceleration is completed, the second robot 200 generates a second projection pattern on its current travel path, that is, the travel path after obstacle avoidance planning.
  • on the one hand, the projection pattern is not updated in real time with the robot's motion state: during the process of decelerating from 2 m/s to 0.5 m/s the projection pattern remains unchanged, and the projection pattern is updated, that is, changed from the first projection pattern to the second projection pattern, only after the obstacle avoidance action is completed. On the other hand, when the robot encounters a static obstacle it may temporarily change its motion state, for example slowing down to pass; in this static obstacle avoidance process, although the robot decelerates and then returns to its original speed, the projection pattern may be left unchanged, keeping the first projection pattern.
  • in this way, the number of projection pattern changes is reduced and the number of distinct patterns is kept small, which lowers the difficulty of identification processing, facilitates fast recognition and operation by the robot, makes it easier to meet the performance requirements of the projection device, and improves practicability.
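  • This update policy can be summarized by a small sketch such as the one below; the event names are assumptions used only for illustration:

```python
# Minimal sketch of the projection-update policy described above; event names are assumed.
def next_projection_pattern(current_pattern, event, new_pattern=None):
    """Keep the current pattern during transient manoeuvres; switch only once an
    obstacle avoidance plan has been fully executed."""
    if event == "avoidance_plan_completed":
        return new_pattern        # first pattern -> second pattern
    if event in ("decelerating", "static_obstacle_pass"):
        return current_pattern    # transient state changes do not update the projection
    return current_pattern
```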
  • several preset motion states can be used as basic projection patterns: for example, a speed value is taken from each of several speed bands and paired with conventional maneuvers such as turning left, turning right, going straight, and stopping; several projection distance values such as 2, 3, 5, or 10 meters are then chosen, the values of these attributes are extracted, and the robot's own code is added to form the basic projection pattern. Further, the projection distance and speed can be combined into a single time attribute, that is, the time interval after which the robot, keeping its motion state unchanged, reaches the first projection pattern, such as 3 s or 5 s; a value of 3 s means the robot will reach the first projection pattern generated at this moment after 3 s.
  • the above basic projection patterns are taught to the robot in advance, so that in the actual operation scene, the robot can quickly identify the projection pattern, and make corresponding planning measures to avoid obstacles.
  • a recognition model can be generated by machine training using pre-prepared supervision data, and image recognition can be performed by using the obtained recognition model.
  • the projection pattern is divided according to the number of robot motion attributes.
  • the projection pattern is composed of four parts: robot number, direction, speed and projection distance.
  • the position of each part within the projection pattern is preset, and a small image of a fixed size is used for each part.
  • a regressor is trained by machine learning that takes such a small image as input and outputs the identification code projected in it; the image to be recognized is then divided into sub-regions according to the preset layout, each sub-region is resized to the fixed size to form a small image, and the trained regressor extracts the motion attribute value from each small image.
  • the present embodiment discloses a device structure of an obstacle avoidance robot, and the robot includes:
  • the robot body 90 can be moved autonomously, and the robot body 90 has complete functions of an autonomous mobile robot such as AMR (Autonomous Mobile Robot).
  • the projection device 60 is relatively fixedly arranged on the robot body; the projection device 60 includes an electronic display and several projection films for the robot to project different projection patterns.
  • the vision device 70 which is installed on the robot body on the same side as the projection device 60, is used to capture the projection pattern of the robot's travel area.
  • the projection device 60 can automatically switch the projection films to project different projection patterns, and each projection pattern represents different robot motion information.
  • the projection device 60 includes a plurality of projection slides, and switches which slide is used by changing the projection angle of one or more projection light sources.
  • the projector includes laser light sources 1R, 1G, and 1B of different colors, collimator lenses, a multi-lens (lenticular lens), a spatial modulation element, a projection lens, and dichroic mirrors.
  • the laser light sources 1R, 1G, and 1B emit red, green, and blue laser light, respectively.
  • the green laser beam becomes substantially parallel light, which is reflected by the mirror and then transmitted through the dichroic mirror.
  • the blue laser beam passes through a collimator lens to become substantially parallel light, and then passes through a dichroic mirror to combine with the green laser beam.
  • after the red laser beam passes through the collimator lens and becomes substantially parallel light, it is combined with the green laser beam and the blue laser beam by a dichroic mirror.
  • the multiplexed laser light passes through the multi-lens to become diffused light, and enters the spatial modulation element.
  • the spatial modulation element modulates the incident light based on the periodic main image signal.
  • the projection lens projects the light modulated by the spatial modulation element on the ground, and gives the pattern color identification.
  • the vision device further includes an image correction controller; the vision device captures the image displayed by the light projected through the projection lens, the image correction controller processes it and converts it into a machine-recognizable image or an image identifier that the robot has been trained on in advance, and the processed projection pattern is sent to the robots in the nearby area.
  • when the robot needs to work and starts to move, it sends an on message to the projection device.
  • the on message format is as follows: id, angle, time, shift.
  • a message includes four fields, namely serial number, angle, time and switching bit.
  • the serial number is used for identification, and the serial number of each message command is different, so as to distinguish different commands and control the corresponding robot and its projection device.
  • the angle is used to indicate the projection angle of the projection device, and the projection angle is changed according to the movement speed of the robot.
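  • A possible representation of this on message is sketched below; the comma-separated wire format is an assumption, only the four fields (id, angle, time, shift) come from the text:

```python
# Sketch of the on message; only the four fields are taken from the text, the
# textual encoding/decoding is illustrative.
from dataclasses import dataclass


@dataclass
class OnMessage:
    id: int        # serial number, unique per command
    angle: float   # projection angle, adjusted with the robot's movement speed
    time: float    # time field of the command
    shift: int     # switching bit

    def encode(self) -> str:
        return f"{self.id},{self.angle},{self.time},{self.shift}"

    @classmethod
    def decode(cls, raw: str) -> "OnMessage":
        i, a, t, s = raw.split(",")
        return cls(int(i), float(a), float(t), int(s))


# Example: OnMessage(id=7, angle=15.0, time=3.0, shift=1).encode() -> "7,15.0,3.0,1"
```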
  • the robot's own vision device 70 may capture the projection pattern projected by its own projection device 60; that is, when the first robot 100 captures the first projection pattern carrying its own motion state, it can do nothing by default, or check whether the projection pattern is displayed normally on its travel path, and if it cannot capture its own projection pattern it gives abnormal feedback.
  • the robot further includes a processing device 80 for recognizing the projection pattern and controlling the robot to perform corresponding obstacle avoidance measures.
  • a processing device 80 can be separately provided on the robot body to recognize the first projection pattern and perform further processing, such as recognizing the projection pattern information, acquiring the robot motion information corresponding to the projection pattern, and performing obstacle avoidance processing in combination with the robot's own motion state.
  • the processing device 80 only needs to be installed on the second robot body that needs to perform obstacle avoidance measures. According to the robot working space and the traffic frequency of the road sections traveled by the robots, the path traveled by the robots is divided into main roads and secondary roads; the main roads are sections that robots frequently pass through or where important tasks are performed, and the secondary roads are sections with few robots or where secondary tasks are performed. The robot on the main road is the first robot 100, which only projects the projection pattern without capturing it, and the robot on the secondary road is the second robot 200, which only captures the projection pattern without projecting one.
  • in other words, the first robot 100 drives on the main road and the second robot 200 drives on the secondary road; the second robot 200 actively avoids obstacles, while the first robot 100 can keep its desired speed to complete more tasks.
  • the processing device 80 may be integrated or embedded in the robot central processing system in the form of a processing module, for identifying the first projection pattern, and combining its own motion information to perform obstacle avoidance control.
  • the second robot recognizes the first projection pattern through the processing module, obtains the motion information of the first robot, and thereby learns from the first projection pattern the first robot's movement speed, movement direction, and position; combining its own movement speed and direction, it determines the approximate position where it will meet the first robot and calculates the collision probability. If the collision probability is greater than the preset value, it takes obstacle avoidance measures, such as decelerating, accelerating, stopping, or steering.
  • the first robot 100 travels at a right angle relative to the second robot 200, and because of the wall the second robot 200 cannot observe the body of the first robot 100, nor can it predict the motion of the first robot 100.
  • the obstacle avoidance action is not triggered until the first robot 100 and the second robot 200 move within a relatively short distance and their lidars scan the outline of each other. At this time, the avoidance effect is poor, and it is possible that the avoidance is not timely and a collision occurs.
  • the second robot 200 projects a two-dimensional code pattern containing the semantics of "Robot JX02, going straight, speed 1.5m/s, distance 6m" on the ground in front of the second robot 200 .
  • the lidar of the first robot 100 has not yet scanned the outline of the second robot 200
  • the vision device of the first robot has already captured the two-dimensional code of the second robot, and recognizes from the code: "Robot JX02, going straight, speed 1.5 m/s, distance 6 m".
  • based on this semantics, the first robot calculates, makes a decision, and takes the avoidance action in advance.
  • the avoidance effect in this embodiment is good; especially when the robot is running at high speed, it can obtain the dynamic information of other robots earlier and gain more reaction time to take the corresponding obstacle avoidance actions.
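  • Parsing such a semantic payload might look like the sketch below; the regular expression assumes the exact phrasing of the example and is not defined by the patent:

```python
# Hedged sketch: parsing the semantic payload from the example above; the textual
# format is assumed from the example wording only.
import re

PAYLOAD_RE = re.compile(
    r"Robot\s+(?P<code>\w+),\s*(?P<direction>going straight|turning left|turning right|stopping),"
    r"\s*speed\s*(?P<speed>[\d.]+)\s*m/s,\s*distance\s*(?P<distance>[\d.]+)\s*m"
)


def parse_projection_semantics(text: str) -> dict:
    m = PAYLOAD_RE.match(text)
    if not m:
        raise ValueError(f"unrecognized projection payload: {text!r}")
    return {
        "robot_code": m.group("code"),
        "direction": m.group("direction"),
        "speed_mps": float(m.group("speed")),
        "distance_m": float(m.group("distance")),
    }


# parse_projection_semantics("Robot JX02, going straight, speed 1.5m/s, distance 6m")
# -> {'robot_code': 'JX02', 'direction': 'going straight', 'speed_mps': 1.5, 'distance_m': 6.0}
```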
  • the first robot 100 is a robot that travels laterally, and the first robot 100 is provided with a projection device that projects a two-dimensional code pattern on the travel path.
  • the second robot 200 is a robot that travels longitudinally, and the second robot 200 is provided with a vision device to capture the two-dimensional code pattern on the travel path.
  • only the second robot 200 needs to perform obstacle avoidance actions in order to avoid the first robot 100; the first robot 100 does not take any evasive action and is only responsible for projecting its own motion information onto the ground it travels over, that is, it only transmits information, while the second robot 200 only receives information and makes decisions.
  • the obstacle avoidance system includes:
  • the projection module 600 in response to the autonomous movement of the robot, generates different projection patterns in the travel path of the robot, and the projection patterns carry the motion information of the robot;
  • a vision module 700 configured to capture the projected pattern in the travel path of the robot
  • the processing module 800 is configured to identify the motion information of the robot corresponding to the projection pattern, and control the other robot to perform corresponding obstacle avoidance measures in combination with the motion information of the other robot.
  • the projection module 600 performs real-time projection along with the movement of the robot, and corresponds to the current movement state of the robot.
  • the preset distances between the first projection pattern and the robot are different distances such as 2 meters, 5 meters, and 10 meters. It can be understood that the greater the movement speed of the robot, the longer the projection distance.
  • the motion information of the robot includes the robot's own code, speed, driving direction, distance from the projection pattern, etc. These information can be represented by different projection patterns.
  • the projection pattern is a pattern whose semantics the robot can recognize, for example a QR (Quick Response) two-dimensional code.
  • the projection module 600 can project different projection patterns, and each projection pattern corresponds to different robot motion information.
  • several preset motion states can be used as basic projection patterns; for example, a speed value is taken from each of the four bands of low speed, medium-low speed, medium speed, and high speed and paired with conventional maneuvers such as turning left, turning right, going straight, and stopping; several projection distance values such as 2, 3, 5, or 10 meters are then chosen, the values of these attributes are extracted, and the robot's own code is added to form the basic projection pattern.
  • the distance and speed are integrated into a time attribute, that is, the time interval for the robot to keep its motion state unchanged and reach the first projection pattern, such as 3s, 5s, etc., that is to say, the robot will reach the first projection pattern generated at this moment after 3s.
  • the above basic projection patterns are taught to the robot in advance, so that in the actual operation scene, the robot can quickly identify the projection pattern, and make corresponding planning measures to avoid obstacles.
  • the vision module 700 is used to capture the projection pattern within the visual range.
  • the vision module 700 can be arranged on the robot, and along with the movement of the robot, dynamically capture the projection pattern on the movement path of the robot, or it can be relatively fixedly arranged on the robot In the working environment, the projection pattern of some areas is monitored in real time.
  • the processing module 800 can be set up separately to identify the first projection pattern and perform further processing, for example identifying the projection pattern information, performing unified computation for the robots in the area covered by the projection pattern, and then scheduling the robots in that area.
  • the processing module 800 is provided on the robot body, or the processing module 800 is integrated or embedded in the robot central processing system, for recognizing the first projection pattern and performing further processing.
  • the second robot recognizes the first projection pattern through the processing module 800, obtains the motion information of the first robot, and thereby learns the first robot's movement speed, movement direction, and distance from the first projection pattern; combining its own movement speed and direction, it determines the approximate position where it will meet the first robot and calculates the collision probability. If the collision probability is greater than the preset value, it takes obstacle avoidance measures, such as decelerating, accelerating, stopping, or steering.
  • the invention provides a robot motion information identification method, an obstacle avoidance method, an obstacle avoidance robot, and an obstacle avoidance system.
  • a first projection pattern carrying the motion information of the robot is generated, the first projection pattern is visually captured, and the motion information of the robot carried by the first projection pattern is recognized.
  • by transmitting the motion information of the robot through the projection pattern, visually capturing the projection pattern for identification processing, and further performing obstacle avoidance planning, a robot can obtain the travel information of other robots on its route before its lidar detects those robots.
  • Such pre-emptive avoidance can, on the one hand, improve safety and reduce the probability of corner collisions; on the other hand, it can increase the running speed and efficiency on a safer basis. Therefore, it has industrial applicability.

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

A robot motion information recognition method, an obstacle avoidance method, a robot capable of obstacle avoidance, and an obstacle avoidance system. The method comprises: on a current travel path of a robot, generating a first projection pattern carrying robot motion information; visually capturing the first projection pattern; and recognizing the robot motion information carried by the first projection pattern. By transmitting robot motion information by means of a projection pattern, visually capturing the projection pattern for recognition, and further performing obstacle avoidance planning, the robot is allowed to obtain travel-related information of other robots on the route before the lidar recognizes the other robots, so as to avoid them in advance; on the one hand, safety can be improved and the probability of collisions at corners can be reduced; on the other hand, the operating speed can be increased while improving safety, and efficiency can be improved.

Description

A robot motion information recognition method, obstacle avoidance method, obstacle avoidance robot, and obstacle avoidance system
Technical Field
The invention belongs to the technical field of robot obstacle avoidance, and particularly relates to a robot motion information identification method, an obstacle avoidance method, an obstacle avoidance robot, and an obstacle avoidance system.
Background Art
During the operation of an AMR (Autonomous Mobile Robot), it is necessary to identify obstacles and then trigger obstacle avoidance, including actions such as stopping and detouring, so as to avoid the obstacles and prevent collisions. These obstacles include fixed obstacles such as walls and shelves, as well as moving obstacles such as pedestrians. Generally speaking, an AMR uses lidar to identify the outlines of objects in order to avoid obstacles: when it detects that an obstacle has come within a certain distance of itself, it triggers obstacle avoidance. A longer obstacle avoidance distance is safer, but loses more efficiency.
Other robots are also high-speed moving objects in the scene and are among the main objects to be avoided. Since robots move faster than pedestrians and are not as flexible as pedestrians, there is a greater chance of collisions between robots. For this reason, the running speed of the robots has to be reduced and the obstacle avoidance distance increased, which leads to a loss of efficiency. At the same time, in some scenarios that are prone to collision due to occlusion, such as narrow intersections (a robot on one side cannot detect the robot on the other side), robots have a higher chance of colliding.
Technical Problem
The present invention provides a robot motion information identification method, an obstacle avoidance method, an obstacle avoidance robot, and an obstacle avoidance system, so as to solve the problems of collision and crowding easily caused by cooperative movement of multiple robots in the prior art, make the robot more flexible and safe when moving, avoid collisions and blockages in complex scenarios such as narrow spaces or intersections, and improve the efficiency of multi-robot collaboration.
技术解决方案technical solutions
第一方面,本发明提供了一种机器人运动信息识别方法,包括:In a first aspect, the present invention provides a method for identifying motion information of a robot, including:
在机器人当前行进路径上,生成携带有所述机器人运动信息的第一投影图案;On the current travel path of the robot, a first projection pattern carrying the motion information of the robot is generated;
视觉捕捉所述第一投影图案;visually capturing the first projected pattern;
识别所述第一投影图案携带的所述机器人运动信息。Identify the robot motion information carried by the first projection pattern.
第二方面,本发明还提供了一种多机器人运动避障方法,包括:第一机器人沿其行进方向的地面上投射出携带其自身运动信息的第一投影图案;In a second aspect, the present invention also provides a multi-robot motion obstacle avoidance method, comprising: the first robot projects a first projection pattern carrying its own motion information on the ground along its travel direction;
第二机器人通过安装在其本体上的视觉装置捕捉所述第一投影图案;The second robot captures the first projection pattern through a vision device installed on its body;
所述第二机器人识别所述第一投影图案对应的所述第一机器人的运动信息,并结合自身的运动信息进行相应的避障措施,以避免与所述第一机器人碰撞。The second robot recognizes the motion information of the first robot corresponding to the first projection pattern, and performs corresponding obstacle avoidance measures in combination with its own motion information to avoid collision with the first robot.
第三方面,本发明还提供了一种可动态避障的机器人,包括:In a third aspect, the present invention also provides a robot that can dynamically avoid obstacles, including:
可自主移动机器人主体;The main body of the robot can be moved autonomously;
投影装置,所述投影装置相对固定设置在所述机器人主体上;所述投影装置包括电子显示和若干投影片,用于所述机器人投射出不同的投影图案;a projection device, which is relatively fixedly arranged on the robot body; the projection device includes an electronic display and a plurality of projection films for the robot to project different projection patterns;
视觉装置,所述视觉装置安装在与所述投影装置同一侧的所述机器人主体上,用于捕捉所述机器人行进区域的投影图案。A vision device, which is installed on the robot body on the same side as the projection device, and is used to capture the projection pattern of the robot's travel area.
第四方面,本发明还提供了一种多机器人避障系统,包括:In a fourth aspect, the present invention also provides a multi-robot obstacle avoidance system, including:
若干机器人,所述若干机器人在同一作业环境中自主运动;a number of robots that move autonomously in the same work environment;
投影模块,响应于所述机器人自主运动,在所述机器人的行进路径中生成不同的投影图案,所述投影图案携带有所述机器人的运动信息;a projection module, in response to the autonomous movement of the robot, generating different projection patterns in the travel path of the robot, the projection patterns carrying the movement information of the robot;
视觉模块,用于捕捉所述机器人行进路径中的所述投影图案;a vision module for capturing the projected pattern in the travel path of the robot;
处理模块,用于识别所述投影图案相对应的机器人运动信息,并结合另一机器人的运动信息,控制所述另一机器人进行相应的避障措施。The processing module is used for identifying the motion information of the robot corresponding to the projection pattern, and combining the motion information of the other robot to control the other robot to perform corresponding obstacle avoidance measures.
Beneficial Effects
The present invention provides a robot motion information recognition method, an obstacle avoidance method, an obstacle avoidance robot, and an obstacle avoidance system. On the current travel path of a robot, a first projection pattern carrying the robot's motion information is generated; the first projection pattern is visually captured; and the robot motion information carried by the first projection pattern is recognized. The robot's motion information is conveyed through the projection pattern, which is visually captured and recognized so that obstacle avoidance planning can be carried out. As a result, a robot can obtain the travel information of other robots on its route before its lidar detects those robots and can take evasive action in advance. On the one hand, this improves safety and reduces the probability of corner collisions; on the other hand, it allows the running speed, and therefore the efficiency, to be increased on a safer basis.
Description of Drawings
FIG. 1 is a flowchart of an embodiment of the robot motion information recognition method of the present invention.
FIG. 2 is a flowchart of another embodiment of the motion information recognition method of the present invention.
FIG. 3 is a flowchart of an embodiment of the multi-robot dynamic obstacle avoidance method of the present invention.
FIG. 4 is a flowchart of another embodiment based on the embodiment of FIG. 3.
FIG. 5 is a flowchart of yet another embodiment of the multi-robot dynamic obstacle avoidance method of the present invention.
FIG. 6 is a schematic structural diagram of a device of the obstacle avoidance robot of the present invention.
FIG. 7 is a schematic diagram of another device structure of the obstacle avoidance robot of the present invention.
FIG. 8 is a schematic diagram of projection and visual capture by the obstacle avoidance robot of the present invention.
FIG. 9 is a schematic diagram of the obstacle avoidance robot performing the obstacle avoidance method of the present invention.
FIG. 10 is a schematic block diagram of the robot obstacle avoidance system of the present invention.
Embodiments of the Present Invention
In order to make the objectives, technical solutions, and advantages of the present disclosure clearer, the present disclosure is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are only intended to explain the present invention and are not intended to limit it. The technical features involved in the different embodiments of the present disclosure described below can be combined with one another as long as they do not conflict.
As shown in FIG. 1, FIG. 1 is a schematic flowchart of an embodiment of the robot motion information recognition method of the present invention; the method may be implemented as steps S10 to S30 described below.
Step S10: on the current travel path of the robot, generate a first projection pattern carrying the motion information of the robot.
In this embodiment, the robot's working scene is generally a fairly large area, such as the interior of various types of warehouses. When the robot receives a task, it travels to the task destination as instructed, generating a rough motion path from the starting point to the destination according to the map. As the robot travels along this path, a first projection pattern is generated on the ground ahead of it; the first projection pattern follows the robot in real time and corresponds to the robot's current motion state. Specifically, the distance between the first projection pattern and the robot may be preset to different values such as 2 m, 5 m, or 10 m; it will be understood that the faster the robot moves, the farther the projection distance. The robot's motion information includes the robot's own identifier, speed, travel direction, distance to the projection pattern, and so on. This information can be represented by different projection patterns, where a projection pattern is a pattern whose semantics a robot can recognize, for example in the form of a QR (Quick Response) code.
In this embodiment, several preferred robot motion states may be preset as basic projection patterns. For example, one speed value may be taken from each of four levels (low, medium-low, medium, and high speed), combined with conventional maneuvers such as turning left, turning right, going straight, and stopping, and with several projection distance values such as 2 m, 3 m, 5 m, or 10 m; values are drawn from these attributes and combined with the robot's own identifier to form a basic projection pattern. Further, the projection distance and speed may be merged into a single time attribute, that is, the interval after which the robot, keeping its motion state unchanged, will reach the first projection pattern, for example 3 s or 5 s; in other words, the robot will reach the first projection pattern generated at this moment after 3 s. These basic projection patterns are learned by the robots in advance through training so that, in an actual operating scene, a robot can quickly recognize a projection pattern and make corresponding planning measures to avoid obstacles.
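To make the idea of a preset pattern vocabulary more concrete, the following sketch (Python; the attribute values, the robot identifier "JX02", and the make_payload helper are illustrative assumptions rather than part of the claimed method) enumerates speed, maneuver, and projection-distance combinations and encodes each as a text payload that could be rendered as a QR code:

```python
from itertools import product

# Hypothetical attribute values; the method only requires that a small,
# pre-agreed set of motion states is chosen and taught to the robots.
SPEEDS_M_S = [0.5, 1.0, 1.5, 2.0]            # low, medium-low, medium, high
MANEUVERS = ["left", "right", "straight", "stop"]
PROJECTION_DISTANCES_M = [2, 3, 5, 10]

def make_payload(robot_id: str, speed: float, maneuver: str, distance: float) -> str:
    """Build a text payload for one basic projection pattern.

    The payload could be rendered as a QR code with any standard QR library;
    the field layout used here is an assumption for illustration only.
    """
    return f"id={robot_id};v={speed};dir={maneuver};d={distance}"

# Enumerate the whole basic pattern vocabulary for one robot.
basic_patterns = [
    make_payload("JX02", v, m, d)
    for v, m, d in product(SPEEDS_M_S, MANEUVERS, PROJECTION_DISTANCES_M)
]

print(len(basic_patterns))   # 4 * 4 * 4 = 64 candidate patterns
print(basic_patterns[0])     # "id=JX02;v=0.5;dir=left;d=2"
```

Because the robots are trained on such a fixed vocabulary in advance, recognition at run time reduces to matching one of a small number of known patterns.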
As for the projection pattern recognition method, a recognition model can be generated through machine training using supervision data prepared in advance, and image recognition is then performed using the obtained model. Further, the projection pattern may be divided according to the number of robot motion attributes; for example, the projection pattern may be composed of four parts, namely robot number, direction, speed, and projection distance, with the position of each part within the projection pattern preset. A regressor that takes a fixed-size small image as input and outputs the projection identification code in that small image is trained. The image of the recognition object is then divided into sub-regions according to the corresponding preset layout, each sub-region is resized to the fixed size to generate a small image, and the trained regressor obtains the motion attribute value from each small image.
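A minimal sketch of the sub-region decomposition described above might look like the following (Python with NumPy and OpenCV; the quadrant layout, the 64x64 small-image size, and the regressor object are assumptions, and the regressor itself would come from the machine training mentioned in the text):

```python
import cv2
import numpy as np

SMALL_SIZE = (64, 64)   # assumed fixed input size for the trained regressor

def split_into_subregions(pattern_img: np.ndarray) -> dict:
    """Split a captured projection pattern into its four preset parts.

    A simple quadrant layout is assumed: robot number (top-left), direction
    (top-right), speed (bottom-left), projection distance (bottom-right).
    The real layout is whatever was preset when the regressor was trained.
    """
    h, w = pattern_img.shape[:2]
    return {
        "robot_id":  pattern_img[: h // 2, : w // 2],
        "direction": pattern_img[: h // 2, w // 2 :],
        "speed":     pattern_img[h // 2 :, : w // 2],
        "distance":  pattern_img[h // 2 :, w // 2 :],
    }

def read_motion_attributes(pattern_img: np.ndarray, regressor) -> dict:
    """Resize each sub-region and let the trained regressor decode its value."""
    values = {}
    for name, region in split_into_subregions(pattern_img).items():
        small = cv2.resize(region, SMALL_SIZE)
        values[name] = regressor.predict(small.reshape(1, -1))[0]
    return values
```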
Step S20: visually capture the first projection pattern.
In this embodiment, the first projection pattern is captured visually, that is, a camera, vision sensor, or similar device takes snapshots or monitors in real time to obtain the image information of the first projection. It will be understood that the generated first projection pattern takes an image presentation form that a vision device can easily capture. Further, the vision device may be configured to upload, store, or otherwise process images only when a preset pattern type is captured; if only an ordinary ground image is detected, no further processing is performed.
It will be understood that, in this embodiment, the first projection pattern may be visually captured by the first robot itself, by a second robot, or by another vision device in the working environment. When the first robot captures the first projection pattern carrying its own motion state, it may by default do nothing, or it may confirm whether the projection pattern is displayed normally on its travel path; if it cannot capture its own projection pattern, it reports an anomaly. When a second robot captures the first projection pattern (here, the second robot refers to any robot other than the robot to which the information carried by the first projection pattern corresponds), the second robot recognizes the first projection pattern and obtains the motion information of the first robot, that is, its speed, direction of motion, and distance from the first projection pattern. Combining this with its own speed and direction, the second robot estimates the approximate position at which it will meet the first robot and computes a collision probability; if the collision probability is greater than a preset value, it takes obstacle avoidance measures such as decelerating, accelerating, stopping, or turning.
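One way such a collision check could be carried out is sketched below (Python; the constant-velocity assumption, the Gaussian-style risk score, and all thresholds are illustrative assumptions rather than a prescribed formula):

```python
import numpy as np

def closest_approach(p1, v1, p2, v2):
    """Time and distance of closest approach for two constant-velocity robots.

    p1, v1: position and velocity of the first robot, decoded from its pattern.
    p2, v2: position and velocity of the second robot, i.e. its own state.
    """
    dp = np.asarray(p1, float) - np.asarray(p2, float)
    dv = np.asarray(v1, float) - np.asarray(v2, float)
    denom = float(dv @ dv)
    t_star = 0.0 if denom < 1e-9 else max(0.0, -float(dp @ dv) / denom)
    d_min = float(np.linalg.norm(dp + dv * t_star))
    return t_star, d_min

def collision_probability(d_min, safe_radius=0.8, spread=0.5):
    """Map the closest-approach distance to a heuristic risk value in [0, 1]."""
    return float(np.exp(-max(0.0, d_min - safe_radius) ** 2 / (2 * spread ** 2)))

def choose_maneuver(p_collision, threshold=0.5):
    """Yield (decelerate) only when the risk exceeds the preset threshold."""
    return "decelerate" if p_collision > threshold else "keep_course"

# First robot 6 m ahead and approaching at 1.5 m/s; second robot crossing at 1 m/s.
t, d = closest_approach(p1=(6.0, 0.0), v1=(-1.5, 0.0),
                        p2=(0.0, -5.0), v2=(0.0, 1.0))
print(choose_maneuver(collision_probability(d)))   # "decelerate"
```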
Step S30: recognize the robot motion information carried by the first projection pattern.
In this embodiment, a processing device may be set up in the working environment to recognize the first projection pattern and perform further processing, for example recognizing the projection pattern information, performing unified computation for the robots near the projection pattern, and then scheduling the robots in that area. In another embodiment, a processing device is provided on the robot body; it may serve as a separate sub-processing system of the robot, or it may be integrated or embedded, in the form of a processing module, into the robot's central processing system to recognize the first projection pattern and perform unified integration, processing, and control.
Based on the description of the embodiment shown in FIG. 1, and with reference to FIG. 2, step S10 further includes:
Step S11: the first robot projects the first projection pattern on the ground along its travel direction, or/and an external projection device in the first robot's working environment projects the first projection pattern.
In this embodiment, the first robot refers to any robot in a robot cluster operating in the same working environment, in the same space, or on the same map. When a robot receives a task, it forms its own path plan and travels along that planned path. On its travel path, that is, directly ahead of the robot in its direction of travel, a first projection pattern is generated; in other words, the position where the first projection pattern is generated is a position the robot is about to reach. The first projection pattern follows the first robot as it moves, and the distance between the first projection pattern and the robot may be preset to different values such as 2 m, 5 m, or 10 m.
The first projection pattern is projected by the first robot itself and carries the motion information of the first robot, which includes the robot's own identifier, speed, travel direction, distance to the projection pattern, and so on. This information can be represented by different projection patterns, where a projection pattern is a pattern whose semantics a robot can recognize, for example in the form of a QR (Quick Response) code.
In this embodiment, several preferred robot motion states may be preset as basic projection patterns, for example one speed value taken from each of four levels (low, medium-low, medium, and high speed), combined with conventional maneuvers such as turning left, turning right, going straight, and stopping, and with several projection distance values such as 2 m, 3 m, 5 m, or 10 m; values are drawn from these attributes and combined with the robot's own identifier to form a basic projection pattern. Further, the projection distance and speed may be merged into a single time attribute, that is, the interval after which the robot, keeping its motion state unchanged, will reach the first projection pattern, for example 3 s or 5 s; in other words, the robot will reach the first projection pattern generated at this moment after 3 s. These basic projection patterns are learned by the robots in advance through training so that, in an actual operating scene, a robot can quickly recognize a projection pattern and make corresponding planning measures to avoid obstacles.
As for the projection pattern recognition method, a recognition model can be generated through machine training using supervision data prepared in advance, and image recognition is then performed using the obtained model. Further, the projection pattern may be divided according to the number of robot motion attributes; for example, the projection pattern may be composed of four parts, namely robot number, direction, speed, and projection distance, with the position of each part within the projection pattern preset. A regressor that takes a fixed-size small image as input and outputs the projection identification code in that small image is trained. The image of the recognition object is then divided into sub-regions according to the corresponding preset layout, each sub-region is resized to the fixed size to generate a small image, and the trained regressor obtains the motion attribute value from each small image.
In another embodiment, an external projection device in the first robot's working environment projects the first projection pattern. Preferably, the external projection device performs the projection operation on certain preset road sections, such as intersections or areas where robots move frequently. Further, the projection device is also provided with a detection apparatus for detecting a robot about to enter the projection section and projecting the detected robot motion information onto a preset projection area to generate the first projection pattern.
Optionally, the projection device is provided with a communication apparatus for receiving motion information sent by a robot about to enter the projection area and projecting the projection pattern corresponding to that information onto the preset projection section. It will be understood that the projection device may also project dynamically within the projection section, following the robot's direction of motion, so that the projection pattern moves together with the robot.
In this embodiment, several preferred robot motion states may be preset as basic projection patterns. For example, in one embodiment, the first projection pattern contains only two pieces of robot motion information, the direction of motion and the arrival time, where the arrival time is the interval after which the robot, keeping its motion state unchanged, will reach the first projection pattern, for example 3 s or 5 s; that is, the robot will reach the first projection pattern generated at this moment after 3 s. In this example, the directions of motion are straight ahead, left, and right, and the arrival times are 3 s, 4 s, and 5 s, so there are nine basic projection patterns in total: straight ahead for 3, 4, or 5 s, left for 3, 4, or 5 s, and right for 3, 4, or 5 s. If the nine projection patterns are further numbered 01 to 09, then when a robot is about to enter the projection area it only needs to send a number between 01 and 09 to the projection device, and the projection device projects the corresponding projection pattern onto the projection section.
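The nine-pattern vocabulary of this example could be tabulated as follows (Python; the 01 to 09 numbering follows the example above, while the function name and ordering details are illustrative):

```python
from itertools import product

# Pattern codes 01-09: every combination of direction and arrival time,
# in the order (straight, left, right) x (3 s, 4 s, 5 s).
DIRECTIONS = ["straight", "left", "right"]
ARRIVAL_TIMES_S = [3, 4, 5]

PATTERN_TABLE = {
    f"{i + 1:02d}": {"direction": d, "arrival_s": t}
    for i, (d, t) in enumerate(product(DIRECTIONS, ARRIVAL_TIMES_S))
}

def code_for(direction: str, arrival_s: int) -> str:
    """Return the two-digit code the robot sends to the projection device."""
    for code, meta in PATTERN_TABLE.items():
        if meta == {"direction": direction, "arrival_s": arrival_s}:
            return code
    raise ValueError("not one of the nine preset patterns")

print(code_for("left", 4))     # "05"
print(PATTERN_TABLE["09"])     # {'direction': 'right', 'arrival_s': 5}
```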
Based on the description of the embodiment shown in FIG. 1, and with reference to FIG. 2, step S20 further includes:
Step S21: the second robot captures the first projection pattern through a vision device installed on its body; or/and a vision device installed in the robot working environment captures the first projection pattern.
In this embodiment, the second robot refers to any robot other than the robot to which the information carried by the first projection pattern corresponds. A vision device is installed on the body of the second robot; it may be a camera, a vision module, or a vision sensor, and is used to take snapshots or to monitor in real time in order to obtain the image information of the first projection.
It will be understood that the generated first projection pattern takes an image presentation form that a vision device can easily capture. Further, the vision device may be configured to upload, store, or otherwise process images only when a preset pattern type is captured, or the second robot may be configured to turn on its vision device only after entering the projection area, capturing images in real time and feeding them back to the robot's processing system.
In another embodiment, a vision device installed in the robot working environment captures the first projection pattern. Further, the vision device captures projection patterns in the robot working environment and sends the captured projection patterns to the processing module of a robot, and the processing module recognizes the projection pattern information, makes a judgment, and takes obstacle avoidance measures. Preferably, the vision devices may be distributed at intersection sections, with each vision device responsible for capturing the projection patterns of only one intersection section and communicating only with the robots in that section; the other (second) robots in that section receive the first projection pattern, recognize and process it, and take corresponding obstacle avoidance measures in combination with their own motion states.
In another application scenario of this embodiment, the vision devices capture the projection patterns of the entire robot working environment, that is, all projection patterns on the robots' navigation map, and feed all projection pattern information back to a robot management system platform. The platform processes the information, monitors the robots' motion states in real time, analyzes robot movement on high-risk road sections, and issues obstacle avoidance instructions to robots with a higher collision risk.
Based on the description of the embodiment shown in FIG. 1, and with reference to FIG. 2, step S30 further includes:
Step S31: a processing device provided on a robot receives the first projection pattern and recognizes the motion information of the first robot corresponding to the projection pattern; or/and a processing device installed in the robot working environment receives the first projection pattern and recognizes the motion information of the first robot corresponding to the first projection pattern.
In this embodiment, the processing device on the robot receives the first projection pattern and recognizes the motion information of the first robot corresponding to the first projection pattern. Optionally, the processing device is installed separately on the robot body as an independent sub-processing system used to complete the robot's dynamic obstacle avoidance; it will be understood that the processing device may also be implemented as a processing module integrated into the robot's central processor. The second robot recognizes the first projection pattern through the processing device, obtains the motion information of the first robot, and thereby learns the first robot's speed, direction of motion, and distance from the first projection pattern; combining this with its own speed and direction, it estimates the approximate position at which it will meet the first robot and computes a collision probability, and if the collision probability is greater than a preset value, it takes obstacle avoidance measures such as decelerating, accelerating, stopping, or turning.
In another embodiment, a processing device installed in the robot working environment receives the first projection pattern and recognizes the motion information of the first robot corresponding to the first projection pattern. Preferably, the processing device is arranged in one or more projection areas, the projection areas being preset road sections such as intersections or areas where robots move frequently.
Further, the processing device is provided with a communication apparatus for receiving the projection pattern image information sent by the vision devices in a projection area, recognizing the robot's motion information, and, in combination with the motion state information of the other robots in the projection area, computing collision probabilities; if a collision probability is greater than a preset value, obstacle avoidance instructions are issued, for example sending deceleration, acceleration, stop, or steering commands to the corresponding other robots in the projection area. Optionally, the processing device may also be a server used to centrally process the projection pattern information of all projection areas in a warehouse and perform obstacle avoidance planning.
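As a hedged sketch of this server-side variant (Python; the message fields, the risk_fn and send_command callables, and the yielding policy are assumptions for illustration), the processing device could aggregate the patterns reported by each projection area's vision devices and push avoidance commands only to robot pairs whose risk exceeds the preset value:

```python
from itertools import combinations

RISK_THRESHOLD = 0.5   # assumed preset value

def plan_area(area_robots, risk_fn, send_command):
    """Check every robot pair in one projection area and dispatch avoidance.

    area_robots: {robot_id: {"pos": (x, y), "vel": (vx, vy)}}, built from the
    decoded projection patterns reported by the area's vision devices.
    risk_fn: callable returning a collision probability for two robot states.
    send_command: callable used to push an avoidance command to one robot.
    """
    for (id_a, state_a), (id_b, state_b) in combinations(area_robots.items(), 2):
        if risk_fn(state_a, state_b) > RISK_THRESHOLD:
            # A simple policy for illustration: the second robot of the pair yields.
            send_command(id_b, "decelerate")

# Toy usage with a stand-in risk model and a print-based dispatcher.
plan_area(
    {"JX01": {"pos": (0, 0), "vel": (1.5, 0)},
     "JX02": {"pos": (6, -5), "vel": (0, 1)}},
    risk_fn=lambda a, b: 0.7,
    send_command=lambda rid, cmd: print(rid, cmd),   # prints "JX02 decelerate"
)
```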
As shown in FIG. 3, FIG. 3 is a schematic flowchart of a multi-robot motion obstacle avoidance method of the present invention. The method may be implemented as steps S100 to S300 described below.
Step S100: the first robot projects, on the ground along its travel direction, a first projection pattern carrying its own motion information.
Step S200: the second robot captures the first projection pattern through a vision device installed on its body.
Step S300: the second robot recognizes the motion information of the first robot corresponding to the first projection pattern and takes corresponding obstacle avoidance measures in combination with its own motion information to avoid colliding with the first robot.
With reference to FIG. 10, in step S100 of this embodiment, the first robot 100 refers to any robot in a robot cluster operating in the same working environment, in the same space, or on the same map. When a robot receives a task, it forms its own path plan and travels along that planned path. On its travel path, that is, directly ahead of the robot in its direction of travel, a first projection pattern is generated; in other words, the position where the first projection pattern is generated is a position the robot is about to reach. The first projection pattern follows the first robot as it moves, and the distance between the first projection pattern and the robot may be preset to different values such as 2 m, 5 m, or 10 m.
The first projection pattern is projected by the first robot 100 itself and carries the motion information of the first robot, which includes the robot's own identifier, speed, travel direction, distance to the projection pattern, and so on. This information can be represented by different projection patterns, where a projection pattern is a pattern whose semantics a robot can recognize, for example in the form of a QR (Quick Response) code.
In this embodiment, several preferred robot motion states may be preset as basic projection patterns, for example one speed value taken from each of four levels (low, medium-low, medium, and high speed), combined with conventional maneuvers such as turning left, turning right, going straight, and stopping, and with several projection distance values such as 2 m, 3 m, 5 m, or 10 m; values are drawn from these attributes and combined with the robot's own identifier to form a basic projection pattern. Further, the projection distance and speed may be merged into a single time attribute, that is, the interval after which the robot, keeping its motion state unchanged, will reach the first projection pattern, for example 3 s or 5 s; in other words, the robot will reach the first projection pattern generated at this moment after 3 s. These basic projection patterns are learned by the robots in advance through training so that, in an actual operating scene, a robot can quickly recognize a projection pattern and make corresponding planning measures to avoid obstacles.
As for the projection pattern recognition method, a recognition model can be generated through machine training using supervision data prepared in advance, and image recognition is then performed using the obtained model. Further, the projection pattern may be divided according to the number of robot motion attributes; for example, the projection pattern may be composed of four parts, namely robot number, direction, speed, and projection distance, with the position of each part within the projection pattern preset. A regressor that takes a fixed-size small image as input and outputs the projection identification code in that small image is trained. The image of the recognition object is then divided into sub-regions according to the corresponding preset layout, each sub-region is resized to the fixed size to generate a small image, and the trained regressor obtains the motion attribute value from each small image.
In step S200 of this embodiment, the second robot 200 refers to any robot other than the robot to which the information carried by the first projection pattern corresponds. A vision device is installed on the body of the second robot 200; it may be a camera, a vision module, or a vision sensor, and is used to take snapshots or to monitor in real time in order to obtain the image information of the first projection.
It will be understood that the generated first projection pattern takes an image presentation form that a vision device can easily capture. Further, the vision device may be configured to upload, store, or otherwise process images only when a preset pattern type is captured, or the second robot 200 may be configured to turn on its vision device only after entering the projection area, capturing images in real time and feeding them back to the robot's processing system.
In one embodiment, in step S300, the processing device on the second robot 200 receives the first projection pattern and recognizes the motion information of the first robot corresponding to the first projection pattern. Optionally, the processing device is installed separately on the robot body as an independent sub-processing system used to complete the robot's dynamic obstacle avoidance; it will be understood that the processing device may also be implemented as a processing module integrated into the robot's central processor. The second robot recognizes the first projection pattern through the processing device, obtains the motion information of the first robot, and thereby learns the first robot's speed, direction of motion, and distance from the first projection pattern; combining this with its own speed and direction, it estimates the approximate position at which it will meet the first robot and computes a collision probability, and if the collision probability is greater than a preset value, it takes obstacle avoidance measures such as decelerating, accelerating, stopping, or turning.
As shown in FIG. 4, in conjunction with the embodiment shown in FIG. 3, FIG. 4 illustrates yet another embodiment of the multi-robot obstacle avoidance method of the present invention, in which step S300 further includes:
Step S310: based on an optimal reciprocal collision avoidance algorithm, the second robot 200 performs obstacle avoidance planning and changes its own velocity direction according to the current position and velocity of the first robot 100 or/and the position and velocity of the first robot 100 at the first projection pattern; or the second robot 200 calculates a first time at which the first robot 100 will reach the first projection pattern and, according to a preset time interval, changes its own speed so that it reaches the first projection pattern earlier or later than the first time by that interval.
In one example of step S310, based on the optimal reciprocal collision avoidance algorithm, the robots perform obstacle avoidance planning and change their own speed and direction according to the current position and velocity of the first robot 100 or the position and velocity of the first robot 100 at the first projection pattern. In this example, the optimal reciprocal collision avoidance algorithm is the ORCA (Optimal Reciprocal Collision Avoidance) algorithm, used for navigation in the projection area. It will be understood that, in this example, the roles of the first robot 100 and the second robot 200 are relative; that is, the second robot 200 can also project its own projection pattern and the first robot 100 can also capture and recognize projection patterns, so a robot in this scenario is both a first robot 100 and a second robot 200. Specifically, the second robot 200 obtains the speed and current position of the first robot 100 by recognizing the projection pattern information, the first robot 100 likewise obtains the speed and current position of the second robot 200 by recognizing the projection pattern information, and the two robots adopt the same avoidance strategy and jointly select new velocities to avoid collision.
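The full ORCA algorithm builds half-plane velocity constraints and solves a small optimization; the toy sketch below (Python; not a real ORCA implementation, only a much-simplified reciprocal adjustment under a constant-velocity assumption) is meant only to illustrate the shared-responsibility idea in which each robot takes half of the required correction:

```python
import numpy as np

def reciprocal_avoidance_step(p_a, v_a, p_b, v_b, safe_dist=1.0, horizon=3.0):
    """Nudge both velocities apart when the robots would pass too close.

    Each robot takes half of the correction, mirroring the reciprocal idea of
    ORCA; the real algorithm instead intersects half-plane constraints and
    picks the velocity closest to the preferred one.
    """
    p_a, v_a, p_b, v_b = (np.asarray(x, float) for x in (p_a, v_a, p_b, v_b))
    future_gap = (p_a + v_a * horizon) - (p_b + v_b * horizon)
    dist = float(np.linalg.norm(future_gap))
    if dist >= safe_dist or dist < 1e-9:
        return v_a, v_b                               # no correction needed
    push = (safe_dist - dist) / horizon * (future_gap / dist)
    return v_a + 0.5 * push, v_b - 0.5 * push         # responsibility shared

# Two robots heading toward the same spot from opposite sides.
v1, v2 = reciprocal_avoidance_step((0, 0), (1.0, 0), (6, 0.3), (-1.0, 0))
print(v1, v2)   # velocities pushed apart sideways by equal amounts
```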
In another example, the second robot 200 calculates a first time at which the first robot 100 will reach the first projection pattern and, according to a preset time interval, changes its own speed so that it reaches the first projection pattern earlier or later than the first time by that interval. In this example, the preset time interval is, for instance, 1 s, 2 s, or 3 s. Further, the second robot 200, taking its own motion information into account and assuming its motion state does not change, determines which robot would reach the current projection pattern first. If the first robot 100 would arrive first or at the same time, the second robot decelerates so that it arrives at the projection pattern a preset time (for example 2 s) later than the first robot 100; if the second robot 200 would arrive first, it accelerates so that it arrives at the projection pattern a preset time (for example 2 s) earlier than the first robot 100. It will be understood that, in this embodiment, only the second robot 200 needs to take obstacle avoidance measures, and the first robot 100 can keep its own motion state and continue working. In one possible example, according to the robots' working space and the traffic frequency of the road sections they travel, the paths traveled by the robots are divided into main roads and secondary roads: main roads are sections that robots pass through frequently or use to perform important tasks, and secondary roads are sections with fewer robots or used for secondary tasks. Robots on the main road are first robots 100 and may only project projection patterns without capturing them, while robots on the secondary road are second robots 200 and only capture projection patterns without projecting them. In other words, the first robot 100 travels on the main road, the second robot 200 travels on the secondary road, the second robot 200 actively avoids obstacles, and the first robot 100 can travel at its own desired speed to complete more tasks.
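The time-interval variant could be sketched as follows (Python; the 2 s interval, the minimum-speed floor, and the way the adjusted speed is derived are assumptions consistent with the example above):

```python
def adjust_speed_by_arrival_time(own_speed, own_dist, other_arrival_time,
                                 interval_s=2.0, min_speed=0.1):
    """Retime the second robot so it crosses the projected spot earlier or later.

    own_speed: current speed of the second robot in m/s.
    own_dist: its distance to the first robot's projection pattern in m.
    other_arrival_time: the first robot's time to reach its pattern in s,
    decoded from the projection; the first robot keeps its motion state.
    """
    own_arrival_time = own_dist / own_speed
    if own_arrival_time >= other_arrival_time:
        # First robot arrives first (or it is a tie): arrive interval_s later.
        target_time = other_arrival_time + interval_s
    else:
        # Second robot would arrive first: arrive interval_s earlier instead.
        target_time = max(other_arrival_time - interval_s, 1e-3)
    return max(own_dist / target_time, min_speed)

# Second robot is 6 m from the pattern at 2 m/s (3 s away); the first robot
# also arrives in 3 s, so the second robot slows down to arrive at 5 s.
print(adjust_speed_by_arrival_time(2.0, 6.0, 3.0))   # 1.2
```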
Based on the description of the embodiment shown in FIG. 3, as shown in FIG. 5, FIG. 5 is a schematic flowchart of another implementation of the multi-robot motion obstacle avoidance method. In the embodiment of FIG. 5, after step S300 of the embodiment shown in FIG. 3 ("the second robot recognizes the motion information of the first robot corresponding to the first projection pattern and takes corresponding obstacle avoidance measures in combination with its own motion information to avoid colliding with the first robot"), step S400 is also executed.
Step S400: after the obstacle avoidance measure, the second robot projects a second projection pattern on the ground along its travel direction.
In this embodiment, the second projection pattern represents the motion information of the second robot 200 after obstacle avoidance planning. For example, after recognizing the projection pattern of the first robot 100 and taking its own motion state into account, the second robot 200 computes that a collision is fairly likely and that obstacle avoidance measures are needed, and it reduces its speed from the original 2 m/s to 0.5 m/s. When the deceleration is complete, a second projection pattern is generated on the current travel path of the second robot 200, that is, on the travel path after obstacle avoidance planning.
It should be noted that the projection pattern is not updated in real time according to the robot's motion state; that is, during the process of decelerating from 2 m/s to 0.5 m/s the projection pattern remains unchanged, and it is updated, from the first projection pattern to the second projection pattern, only after the obstacle avoidance planning action has been completed. On the other hand, when the robot encounters a static obstacle it may temporarily change its motion state, for example slowing down to pass; in this static obstacle avoidance process, although the robot decelerates and then returns to its original speed, its projection pattern may be left unchanged, keeping the first projection pattern. In this way, the number of projection pattern changes can be reduced and the number of distinct pattern styles kept small, which lowers the difficulty of recognition processing, facilitates fast recognition by the robots, makes the performance requirements of the projection device easier to meet, and improves practicality.
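The update policy described above, in which the projected pattern changes only once a planned avoidance maneuver has finished and is left alone during transient static-obstacle slowdowns, could be expressed roughly as follows (Python; the state names and the projector.show interface are illustrative assumptions):

```python
class ProjectionUpdater:
    """Keep the current pattern until an avoidance plan has fully completed."""

    def __init__(self, projector, initial_pattern):
        self.projector = projector            # assumed to expose .show(pattern)
        self.current_pattern = initial_pattern
        self.pending_pattern = None
        self.projector.show(initial_pattern)

    def start_avoidance(self, new_pattern):
        """Called when avoidance planning begins; do not switch the pattern yet."""
        self.pending_pattern = new_pattern

    def finish_avoidance(self):
        """Called once the planned speed or heading change has completed."""
        if self.pending_pattern is not None:
            self.current_pattern = self.pending_pattern
            self.pending_pattern = None
            self.projector.show(self.current_pattern)

    def static_obstacle_slowdown(self):
        """Transient slowdowns for static obstacles keep the first pattern."""
        pass   # intentionally no pattern change
```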
In this embodiment, several preferred robot motion states may be preset as basic projection patterns, for example one speed value taken from each of four levels (low, medium-low, medium, and high speed), combined with conventional maneuvers such as turning left, turning right, going straight, and stopping, and with several projection distance values such as 2 m, 3 m, 5 m, or 10 m; values are drawn from these attributes and combined with the robot's own identifier to form a basic projection pattern. Further, the projection distance and speed may be merged into a single time attribute, that is, the interval after which the robot, keeping its motion state unchanged, will reach the first projection pattern, for example 3 s or 5 s; in other words, the robot will reach the first projection pattern generated at this moment after 3 s. These basic projection patterns are learned by the robots in advance through training so that, in an actual operating scene, a robot can quickly recognize a projection pattern and make corresponding planning measures to avoid obstacles.
As for the projection pattern recognition method, a recognition model can be generated through machine training using supervision data prepared in advance, and image recognition is then performed using the obtained model. Further, the projection pattern may be divided according to the number of robot motion attributes; for example, the projection pattern may be composed of four parts, namely robot number, direction, speed, and projection distance, with the position of each part within the projection pattern preset. A regressor that takes a fixed-size small image as input and outputs the projection identification code in that small image is trained. The image of the recognition object is then divided into sub-regions according to the corresponding preset layout, each sub-region is resized to the fixed size to generate a small image, and the trained regressor obtains the motion attribute value from each small image.
As shown in FIG. 6, this embodiment discloses a device structure of an obstacle avoidance robot. The robot includes:
an autonomously movable robot body 90, the robot body 90 having complete autonomous mobile robot functionality, such as an AMR (Autonomous Mobile Robot);
a projection device 60 fixedly arranged on the robot body, the projection device 60 including an electronic display and a plurality of projection slides used by the robot to project different projection patterns;
and a vision device 70 installed on the robot body on the same side as the projection device 60 and used to capture projection patterns in the area the robot travels through.
In this embodiment, the projection device 60 can project different projection patterns by automatically switching projection slides, each projection pattern representing different robot motion information. Specifically, the projection device 60 contains a plurality of projection slides, and the slide in use is switched by controlling the projection angle of one or more projection light sources.
Optionally, more projection patterns can be generated by combining patterns with colors. The projector includes laser light sources 1R, 1G, and 1B of different colors, collimator lenses, a lenticular lens, a spatial modulation element, a projection lens, and dichroic mirrors. Red, blue, and green laser light is emitted from the laser light sources 1R, 1G, and 1B, respectively. The green laser light is made substantially parallel by a collimator lens, reflected by a mirror, and transmitted through a dichroic mirror. The blue laser light is made substantially parallel by a collimator lens and then combined with the green laser light by a dichroic mirror. The red laser light is made substantially parallel by a collimator lens and then combined with the green and blue laser light by a dichroic mirror. The combined laser light passes through the lenticular lens to become diffused light and enters the spatial modulation element, which modulates the incident light based on a periodic main image signal. The projection lens projects the light modulated by the spatial modulation element onto the ground, giving the pattern a color identification. In this embodiment, the vision device further includes an image correction controller: the vision device captures the image displayed by the light projected by the projection lens, the image correction controller processes it and converts it into a machine-recognizable image or an image identifier the robots have been trained to recognize, and the processed projection pattern is sent to the robots in the nearby area.
In this embodiment, when the robot needs to work and starts to move, it sends a start message to the projection device. The start message format is, for example: id, angle, time, shift. A message includes four fields, namely a sequence number, an angle, a time, and a switch bit. The sequence number serves as an identifier; each message command has a different sequence number, which distinguishes the commands and controls the corresponding robot and its projection device. The angle indicates the projection angle of the projection device; the projection angle changes with the robot's speed, and the faster the robot runs, the farther the projected image is from the robot. The time controls the duration of the projection; for example, when the robot travels on a wide straight section, the projection time can be controlled intermittently. The shift is the value of the switch bit; the switch-bit attribute can take several values, each value representing a pattern to be projected.
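A minimal representation of this four-field start message might be (Python; the field types, units, and comma-separated serialization are assumptions, since only the field names are specified above):

```python
from dataclasses import dataclass

@dataclass
class ProjectorStartMessage:
    id: int        # sequence number identifying this command
    angle: float   # projection angle; grows with the robot's speed
    time: float    # projection duration in seconds (0 could mean continuous)
    shift: int     # switch-bit value selecting which slide or pattern to project

    def encode(self) -> str:
        """Serialize the message as a simple comma-separated string."""
        return f"{self.id},{self.angle},{self.time},{self.shift}"

msg = ProjectorStartMessage(id=17, angle=25.0, time=5.0, shift=3)
print(msg.encode())   # "17,25.0,5.0,3"
```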
In this embodiment, when the robot's own vision device 70 captures the projection pattern projected by its own projection device 60, that is, when the first robot 100 captures the first projection pattern carrying its own motion state, the first robot 100 may by default do nothing, or it may confirm whether the projection pattern is displayed normally on its travel path; if it cannot capture its own projection pattern, it reports an anomaly.
Based on the description of the embodiment shown in FIG. 6, as shown in FIG. 7, in another embodiment of the present invention the robot further includes a processing device 80 for recognizing projection patterns and controlling the robot to take corresponding obstacle avoidance measures.
In this embodiment, a separate processing device 80 may be provided on the robot body to recognize the first projection pattern and perform further processing, for example recognizing the projection pattern information, obtaining the robot motion information corresponding to the projection pattern, and performing obstacle avoidance processing in combination with the robot's own motion state. In one possible embodiment, the processing device 80 only needs to be installed on the body of the second robot, the robot that needs to take obstacle avoidance measures. According to the robots' working space and the traffic frequency of the road sections they travel, the paths traveled by the robots are divided into main roads and secondary roads: main roads are sections that robots pass through frequently or use to perform important tasks, and secondary roads are sections with fewer robots or used for secondary tasks. Robots on the main road are first robots 100 and may only project projection patterns without capturing them, while robots on the secondary road are second robots 200 and only capture projection patterns without projecting them. In other words, the first robot 100 travels on the main road, the second robot 200 travels on the secondary road, the second robot 200 actively avoids obstacles, and the first robot 100 can travel at its own desired speed to complete more tasks.
In another possible embodiment, the processing device 80 may be integrated or embedded, in the form of a processing module, into the robot's central processing system, where it recognizes the first projection pattern and performs obstacle avoidance control in combination with the robot's own motion information. Specifically, the second robot recognizes the first projection pattern through the processing module, obtains the motion information of the first robot, and thereby learns the first robot's speed, direction of motion, and distance from the first projection pattern; combining this with its own speed and direction, it estimates the approximate position at which it will meet the first robot and computes a collision probability, and if the collision probability is greater than a preset value, it takes obstacle avoidance measures such as decelerating, accelerating, stopping, or turning.
With reference to FIG. 8 and FIG. 9, when there is no projection device, the first robot 100 travels at a right angle relative to the second robot 200; because of the wall, the second robot 200 cannot observe the body of the first robot 100, nor can it predict the first robot 100's upcoming route. Only when the first robot 100 and the second robot 200 have moved close enough for their lidars to scan each other's outlines is an obstacle avoidance action triggered. At that point the avoidance effect is poor, and a collision may occur because the avoidance comes too late.
When a projection device is present, the ground in front of the second robot 200 shows a projected two-dimensional code pattern carrying the semantics "robot JX02, going straight, speed 1.5 m/s, distance 6 m". Although the lidar of the first robot 100 has not yet scanned the outline of the second robot 200, the vision device of the first robot has already captured the second robot's two-dimensional code; it recognizes it, performs its calculation according to the semantics "robot JX02, going straight, speed 1.5 m/s, distance 6 m", makes a decision, and takes an evasive action in advance. The avoidance effect of this implementation is good; in particular, when the robots run at high speed, a robot can obtain the dynamic information of other robots earlier and gain more reaction time to perform the corresponding obstacle avoidance action.
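The semantics carried by such a two-dimensional code could be parsed along the following lines (Python; the exact text layout of the payload is an assumption based on the example string above):

```python
import re

PAYLOAD = "robot JX02, going straight, speed 1.5m/s, distance 6m"

def parse_payload(text: str) -> dict:
    """Extract robot id, maneuver, speed, and distance from the decoded code."""
    robot = re.search(r"robot\s+([^,\s]+)", text).group(1)
    speed = float(re.search(r"speed\s+([\d.]+)\s*m/s", text).group(1))
    dist = float(re.search(r"distance\s+([\d.]+)\s*m", text).group(1))
    maneuver = "straight" if "straight" in text else "turn"
    return {"robot": robot, "maneuver": maneuver,
            "speed_m_s": speed, "distance_m": dist}

print(parse_payload(PAYLOAD))
# {'robot': 'JX02', 'maneuver': 'straight', 'speed_m_s': 1.5, 'distance_m': 6.0}
```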
Referring again to FIG. 9, in an optional embodiment of the present application, according to the map information for robot operation, the first robot 100 is a robot travelling laterally and is provided with a projection device that projects a two-dimensional code pattern on its travel path, while the second robot 200 is a robot travelling longitudinally and is provided with a vision device used to capture two-dimensional code patterns on its travel path. In this optional embodiment, only the second robot 200 needs to perform obstacle avoidance actions in order to avoid the first robot 100; the first robot 100 does not perform any evasive action and is only responsible for projecting its own motion information onto the ground it travels over, that is, it only transmits information, while the second robot 200 only receives information and makes decisions.
As shown in FIG. 10, the present application discloses a multi-robot dynamic obstacle avoidance system. The obstacle avoidance system includes:
a plurality of robots that move autonomously in the same working environment;
a projection module 600 that, in response to the autonomous movement of a robot, generates different projection patterns on the robot's travel path, the projection patterns carrying the robot's motion information;
a vision module 700 for capturing the projection patterns on the robot's travel path;
and a processing module 800 for recognizing the robot motion information corresponding to a projection pattern and, in combination with the motion information of another robot, controlling that other robot to take corresponding obstacle avoidance measures.
在该实施例中,投影模块600跟随着机器人运动进行实时投影,且与该机器人当前的运动状态相对应。具体地,预设第一投影图案与机器人的距离为2米,5米,10米等不同距离,可以理解的是,机器人的运动速度越大,其投影距离越远。机器人的运动信息包括机器人自身编码、速度、行驶方向、与投影图案距离等,这些信息可以通过不同的投影图案进行表示,投影图案为机器人可识别其语义的图案,例如QR(Quick Response)code二维码的形式。In this embodiment, the projection module 600 performs real-time projection along with the movement of the robot, and corresponds to the current movement state of the robot. Specifically, the preset distances between the first projection pattern and the robot are different distances such as 2 meters, 5 meters, and 10 meters. It can be understood that the greater the movement speed of the robot, the longer the projection distance. The motion information of the robot includes the robot's own code, speed, driving direction, distance from the projection pattern, etc. These information can be represented by different projection patterns. The projection pattern is a pattern that the robot can recognize its semantics, such as QR (Quick Response) code II. dimensional code form.
Further, the projection module 600 can project different projection patterns, each corresponding to different robot motion information. Specifically, several typical preset motion states of the robot can serve as basic projection patterns: for example, take one speed value from each of the four bands of low, medium-low, medium, and high speed, combine it with the usual manoeuvres of turning left, turning right, going straight, and stopping, then take several projection distance values such as 2 meters, 3 meters, 5 meters, and 10 meters, draw values from these attributes, and add the robot's own identifier to form a basic projection pattern. Further, the projection distance and speed can also be merged into a single time attribute, namely the time interval after which the robot, keeping its motion state unchanged, reaches the first projection pattern, such as 3 s or 5 s; in other words, the robot will arrive at the first projection pattern generated at this moment after 3 s. The robots are trained on these basic projection patterns in advance, so that in an actual working scenario a robot can quickly identify a projection pattern and take the corresponding planning measures to avoid obstacles.
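Under these assumptions, the basic pattern vocabulary described above could be enumerated as follows. The concrete speed values are illustrative, since the embodiment only names the speed bands, manoeuvres, and distances.

```python
from itertools import product

# Sketch of the pre-trained pattern vocabulary: one assumed speed per band,
# the four basic manoeuvres, and a few projection distances from the text.
SPEEDS_MPS = [0.3, 0.8, 1.2, 2.0]           # low, medium-low, medium, high (assumed values)
MANOEUVRES = ["left", "right", "straight", "stop"]
DISTANCES_M = [2.0, 3.0, 5.0, 10.0]

BASIC_PATTERNS = [
    {"speed_mps": v, "manoeuvre": m, "distance_m": d}
    for v, m, d in product(SPEEDS_MPS, MANOEUVRES, DISTANCES_M)
]

# Alternatively, distance and speed can be folded into a single time attribute:
# the robot reaches the projected pattern after this many seconds.
def time_to_pattern(distance_m: float, speed_mps: float) -> float:
    return float("inf") if speed_mps == 0 else distance_m / speed_mps

print(len(BASIC_PATTERNS), time_to_pattern(6.0, 1.5))  # 64 patterns, 4.0 s
```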
In this embodiment, the vision module 700 is used to capture projection patterns within its visual range. The vision module 700 may be arranged on a robot, moving with it and dynamically capturing projection patterns along the robot's travel path, or it may be installed at a relatively fixed position in the robots' working environment to monitor the projection patterns in certain areas in real time. In this embodiment, the processing module 800 can also be set up separately to identify the first projection pattern and perform further processing, for example identifying the projection pattern information, performing unified computation for the robots in the area near the projection pattern, and then scheduling the robots in that area.
Preferably, the processing module 800 is provided on the robot body, or is integrated or embedded in the robot's central processing system, for recognizing the first projection pattern and performing further integrated processing. Specifically, the second robot recognizes the first projection pattern through the processing module 800 and obtains the motion information of the first robot, thereby learning the first robot's movement speed and direction and its distance from the first projection pattern. Combining this with its own speed and direction, it estimates the approximate position at which it will meet the first robot and calculates the collision probability; if the collision probability is greater than a preset value, it takes obstacle avoidance measures such as decelerating, accelerating, stopping, or turning.
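One standard way to turn the decoded quantities into the collision check described above is a closest-approach test between two constant-velocity motions. The following hedged sketch assumes planar positions and velocities and a hypothetical `safety_radius_m` threshold; it is one possible realisation of the check, not the specific calculation prescribed by the embodiment.

```python
import math

def closest_approach(p1, v1, p2, v2):
    """Minimum future distance between two constant-velocity robots (2D tuples)."""
    dp = (p2[0] - p1[0], p2[1] - p1[1])
    dv = (v2[0] - v1[0], v2[1] - v1[1])
    dv2 = dv[0] ** 2 + dv[1] ** 2
    # Time of closest approach, clamped to "now or later".
    t = 0.0 if dv2 == 0 else max(0.0, -(dp[0] * dv[0] + dp[1] * dv[1]) / dv2)
    dx = dp[0] + dv[0] * t
    dy = dp[1] + dv[1] * t
    return math.hypot(dx, dy), t

def needs_avoidance(p1, v1, p2, v2, safety_radius_m=1.0):
    """True if the predicted closest approach falls below the safety radius."""
    d_min, _ = closest_approach(p1, v1, p2, v2)
    return d_min < safety_radius_m

# Example: first robot heading +x at 1.5 m/s, second robot heading +y at 1.0 m/s.
print(needs_avoidance((0.0, 0.0), (1.5, 0.0), (4.0, -4.0), (0.0, 1.0)))
```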
The above descriptions are only preferred embodiments of the present invention and are not intended to limit it. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.
Industrial Applicability
The present invention provides a robot motion information recognition method, an obstacle avoidance method, an obstacle avoidance robot, and an obstacle avoidance system. On the current travel path of a robot, a first projection pattern carrying the robot's motion information is generated; the first projection pattern is visually captured; and the robot motion information carried by the first projection pattern is recognized. The robot's motion information is conveyed through the projection pattern, and the visually captured pattern is recognized and processed for further obstacle avoidance planning, so that a robot can obtain the travel information of other robots on its route before its lidar detects them and can evade them in advance. On the one hand, this improves safety and reduces the probability of corner collisions; on the other hand, it allows the running speed, and hence the efficiency, to be increased on a safer basis. The invention therefore has industrial applicability.

Claims (10)

  1. A robot motion information recognition method, comprising:
    generating, on the current travel path of a robot, a first projection pattern carrying motion information of the robot;
    visually capturing the first projection pattern; and
    identifying the robot motion information carried by the first projection pattern.
  2. The motion information recognition method according to claim 1, wherein generating, on the current travel path of the robot, the first projection pattern carrying the motion information of the robot comprises:
    projecting the first projection pattern onto the ground along the travel direction of a first robot; or/and projecting the first projection pattern by an external projection device in the working environment of the first robot.
  3. The motion information recognition method according to claim 1, wherein visually capturing the first projection pattern comprises:
    capturing, by a second robot, the first projection pattern through a vision device installed on its body; or/and capturing the first projection pattern by a vision device installed in the robot working environment.
  4. The motion information recognition method according to claim 1, wherein identifying the motion information of the first robot carried by the first projection pattern comprises:
    receiving, by a processing device provided on a robot, the first projection pattern, and identifying the motion information of the first robot corresponding to the projection pattern; or/and
    receiving, by a processing device installed in the robot working environment, the first projection pattern, and identifying the motion information of the first robot corresponding to the first projection pattern.
  5. A multi-robot motion obstacle avoidance method, comprising:
    projecting, by a first robot, a first projection pattern carrying its own motion information onto the ground along its travel direction;
    capturing, by a second robot, the first projection pattern through a vision device installed on its body; and
    identifying, by the second robot, the motion information of the first robot corresponding to the first projection pattern, and taking corresponding obstacle avoidance measures in combination with its own motion information, so as to avoid colliding with the first robot.
  6. The obstacle avoidance method according to claim 5, wherein the second robot taking corresponding obstacle avoidance measures in combination with its own motion information comprises:
    performing, by the second robot, obstacle avoidance planning based on an optimal reciprocal collision avoidance algorithm according to the current position and speed of the first robot or/and the position and speed of the first robot at the first projection pattern, and changing its own speed and direction; or
    calculating, by the second robot, a first time at which the first robot will reach the first projection pattern, and changing its own movement speed according to a preset time interval, so that it reaches the first projection pattern earlier or later than the first time by the time interval.
  7. The obstacle avoidance method according to claim 5, wherein, after the second robot takes corresponding obstacle avoidance measures in combination with its own motion information, the method further comprises:
    projecting, by the second robot after the obstacle avoidance measures, a second projection pattern onto the ground along its travel direction.
  8. A robot capable of dynamic obstacle avoidance, comprising:
    an autonomously movable robot body;
    a projection device, which is arranged on the robot body in a relatively fixed manner, the projection device comprising an electronic display and several projection slides, for the robot to project different projection patterns; and
    a vision device, which is installed on the robot body on the same side as the projection device and is configured to capture projection patterns in the robot's travel area.
  9. The robot according to claim 8, wherein the robot further comprises a processing device, the processing device being configured to identify the projection patterns and control the robot to take corresponding obstacle avoidance measures.
  10. A multi-robot obstacle avoidance system, comprising: several robots, which move autonomously in the same working environment;
    a projection module, which, in response to the autonomous movement of a robot, generates different projection patterns on the travel path of the robot, the projection patterns carrying the motion information of the robot;
    a vision module, configured to capture the projection patterns on the travel path of the robot; and
    a processing module, configured to identify the robot motion information corresponding to a projection pattern and, in combination with the motion information of another robot, control the other robot to take corresponding obstacle avoidance measures.
PCT/CN2021/106897 2020-07-23 2021-07-16 Robot motion information recognition method, obstacle avoidance method, robot capable of obstacle avoidance, and obstacle avoidance system WO2022017296A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010715353.5A CN111844038B (en) 2020-07-23 2020-07-23 Robot motion information identification method, obstacle avoidance robot and obstacle avoidance system
CN202010715353.5 2020-07-23

Publications (1)

Publication Number Publication Date
WO2022017296A1

Family

ID=72949824

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/106897 WO2022017296A1 (en) 2020-07-23 2021-07-16 Robot motion information recognition method, obstacle avoidance method, robot capable of obstacle avoidance, and obstacle avoidance system

Country Status (2)

Country Link
CN (1) CN111844038B (en)
WO (1) WO2022017296A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111844038B (en) * 2020-07-23 2022-01-07 炬星科技(深圳)有限公司 Robot motion information identification method, obstacle avoidance robot and obstacle avoidance system

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103324196A (en) * 2013-06-17 2013-09-25 南京邮电大学 Multi-robot path planning and coordination collision prevention method based on fuzzy logic
CN204819543U (en) * 2015-06-24 2015-12-02 燕山大学 Centralized control formula multirobot motion control system
CN106527432A (en) * 2016-11-04 2017-03-22 浙江大学 Indoor mobile robot cooperative system based on fuzzy algorithm and two-dimensional code self correction
CN107168337A (en) * 2017-07-04 2017-09-15 武汉视览科技有限公司 A kind of mobile robot path planning and dispatching method of view-based access control model identification
CN108303972A (en) * 2017-10-31 2018-07-20 腾讯科技(深圳)有限公司 The exchange method and device of mobile robot
US20190163196A1 (en) * 2017-11-28 2019-05-30 Postmates Inc. Light Projection System
US20190384309A1 (en) * 2018-06-18 2019-12-19 Zoox, Inc. Occlusion aware planning
US20200009734A1 (en) * 2019-06-18 2020-01-09 Lg Electronics Inc. Robot and operating method thereof
CN111844038A (en) * 2020-07-23 2020-10-30 炬星科技(深圳)有限公司 Robot motion information identification method, obstacle avoidance robot and obstacle avoidance system

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105856227A (en) * 2016-04-18 2016-08-17 呼洪强 Robot vision navigation technology based on feature recognition
CN106041931B (en) * 2016-06-30 2018-03-13 广东工业大学 A kind of robot cooperated anticollision method for optimizing route of the more AGV of more space with obstacle
CN106325280B (en) * 2016-10-20 2019-05-31 上海物景智能科技有限公司 A kind of multirobot collision-proof method and system
JP2020004017A (en) * 2018-06-27 2020-01-09 アイシン・エィ・ダブリュ株式会社 Image data transmission device and image data transmission program
CN109167990A (en) * 2018-08-14 2019-01-08 上海常仁信息科技有限公司 Real-time volume optical projection system based on robot
CN110162035B (en) * 2019-03-21 2020-09-18 中山大学 Cooperative motion method of cluster robot in scene with obstacle


Also Published As

Publication number Publication date
CN111844038B (en) 2022-01-07
CN111844038A (en) 2020-10-30

Similar Documents

Publication Publication Date Title
US11016493B2 (en) Planning robot stopping points to avoid collisions
US20190286145A1 (en) Method and Apparatus for Dynamic Obstacle Avoidance by Mobile Robots
JP7178061B2 (en) Human interaction automatic guided vehicle
KR102118278B1 (en) Coordinating multiple agents under sparse networking
US10725471B2 (en) Virtual line-following and retrofit method for autonomous vehicles
US9116521B2 (en) Autonomous moving device and control method thereof
US20090148034A1 (en) Mobile robot
CN108227719B (en) Mobile robot in-place precision control method, system, medium and equipment
CN108290292B (en) Display of variable guard area
US11241790B2 (en) Autonomous moving body and control program for autonomous moving body
CN103884330A (en) Information processing method, mobile electronic device, guidance device, and server
US20190381662A1 (en) Autonomous moving body and control program for autonomous moving body
US11513525B2 (en) Server and method for controlling laser irradiation of movement path of robot, and robot that moves based thereon
JP7489463B2 (en) Autonomous mobile robot linkage system and autonomous mobile robot
WO2022017296A1 (en) Robot motion information recognition method, obstacle avoidance method, robot capable of obstacle avoidance, and obstacle avoidance system
KR20220134033A (en) Point cloud feature-based obstacle filtering system
CN111857114A (en) Robot formation moving method, system, equipment and storage medium
JP2011141663A (en) Automated guided vehicle and travel control method for the same
Mišeikis et al. Multi 3D camera mapping for predictive and reflexive robot manipulator trajectory estimation
Kenk et al. Human-aware Robot Navigation in Logistics Warehouses.
CN109211260A (en) The driving path method and device for planning of intelligent vehicle, intelligent vehicle
US11537137B2 (en) Marker for space recognition, method of moving and lining up robot based on space recognition and robot of implementing thereof
US20220382287A1 (en) Methods and apparatus for coordinating autonomous vehicles using machine learning
JP2020052816A (en) Control unit
JP7077531B2 (en) Driving support device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21847137

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 14.06.2023.)

122 Ep: pct application non-entry in european phase

Ref document number: 21847137

Country of ref document: EP

Kind code of ref document: A1