WO2018176668A1 - Obstacle avoidance control system and method for a robot, robot, and storage medium - Google Patents

Obstacle avoidance control system and method for a robot, robot, and storage medium

Info

Publication number
WO2018176668A1
Authority
WO
WIPO (PCT)
Prior art keywords
robot
obstacle
model
distance
obstacle avoidance
Prior art date
Application number
PCT/CN2017/091368
Other languages
English (en)
French (fr)
Inventor
周涛涛
周宝
肖京
Original Assignee
平安科技(深圳)有限公司
Priority date
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司 filed Critical 平安科技(深圳)有限公司
Priority to JP2018533757A priority Critical patent/JP6716178B2/ja
Priority to EP17897209.7A priority patent/EP3410246B1/en
Priority to US16/084,231 priority patent/US11059174B2/en
Priority to KR1020187018065A priority patent/KR102170928B1/ko
Priority to SG11201809892QA priority patent/SG11201809892QA/en
Priority to AU2017404562A priority patent/AU2017404562B2/en
Publication of WO2018176668A1 publication Critical patent/WO2018176668A1/zh

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0214Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory in accordance with safety or protection criteria, e.g. avoiding hazardous areas
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1656Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
    • B25J9/1666Avoiding collision or forbidden zones
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1628Programme controls characterised by the control loop
    • B25J9/163Programme controls characterised by the control loop learning, adaptive, model based, rule based expert control
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1674Programme controls characterised by safety, monitoring, diagnostic
    • B25J9/1676Avoiding collision or forbidden zones
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0238Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0242Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using non-visible light signals, e.g. IR or UV signals
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0255Control of position or course in two dimensions specially adapted to land vehicles using acoustic signals, e.g. ultra-sonic singals
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/08Control of attitude, i.e. control of roll, pitch, or yaw
    • G05D1/0891Control of attitude, i.e. control of roll, pitch, or yaw specially adapted for land vehicles
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00Indexing scheme for image generation or computer graphics
    • G06T2210/12Bounding box

Definitions

  • the present invention relates to the field of computer technologies, and in particular, to an obstacle avoidance control system, method, robot, and storage medium for a robot.
  • autonomous mobile robots can be widely used in many scenarios: guiding in exhibition halls, leading visitors from one exhibition area to another; serving in restaurants, actively welcoming guests and leading them to vacant tables; and guiding and patrolling in public places, moving along a programmed route and stopping to answer questions when someone needs help. In these scenarios, how to prevent the robot from colliding with obstacles in the environment is an important technical problem.
  • autonomous mobile robots rely on their own sensors to locate and avoid obstacles.
  • the industry's usual obstacle avoidance scheme is to install proximity sensors (such as ultrasonic, infrared, or laser sensors) on the robot; if the robot detects that it is within a certain distance of an obstacle (such as 10 cm), obstacle avoidance is performed.
  • the existing obstacle avoidance scheme has the following disadvantages. First, obstacles can only be detected in the plane at the height of the sensor: in the case of a four-legged table, if the sensor height is 30 cm and the tabletop height is 60 cm, the sensor cannot detect the obstacle, which eventually causes the robot to hit the table. Second, obstacles can only be detected in the directions in which sensors are installed: if there is no sensor on the back of the robot, moving backward can cause a collision.
  • the main object of the present invention is to provide a robot obstacle avoidance control system, method, robot and storage medium, which are intended to effectively control robot obstacle avoidance.
  • a first aspect of the present application provides an obstacle avoidance control system for a robot, where the obstacle avoidance control system includes:
  • a determining module configured to acquire current positioning data of the robot in real time or at timed intervals, and to determine, according to the current positioning data and the predetermined position data of each obstacle in the moving area, whether there is an obstacle on the path from the current positioning position to the target position whose distance from the current positioning position is less than a preset distance;
  • a calculation module configured to, if there is an obstacle whose distance from the current positioning position is less than the preset distance, calculate the shortest distance between the robot and the obstacle according to the acquired positioning data, the predetermined 3D model of the robot, and the predetermined 3D model of the obstacle;
  • a control module configured to calculate the direction in which the robot should currently move according to the acquired positioning data, the calculated shortest distance, and the 3D model of the obstacle, and to control the motion posture of the robot according to the calculated motion direction so as to avoid the obstacle.
  • the second aspect of the present application further provides a robot obstacle avoidance method, the method comprising the following steps:
  • A1. Acquire the current positioning data of the robot in real time or at timed intervals, and determine, according to the current positioning data and the predetermined position data of each obstacle in the moving area, whether there is an obstacle on the path from the current positioning position to the target position whose distance from the current positioning position is less than a preset distance;
  • A2. If there is an obstacle whose distance from the current positioning position is less than the preset distance, calculate the shortest distance between the robot and the obstacle according to the acquired positioning data, the predetermined 3D model of the robot, and the predetermined 3D model of the obstacle;
  • A3. Calculate the direction in which the robot should currently move according to the acquired positioning data, the calculated shortest distance, and the 3D model of the obstacle, and control the motion posture of the robot according to the calculated motion direction to avoid the obstacle.
  • a third aspect of the present application provides a robot including a processor and a memory, wherein the memory stores an obstacle avoidance control system of the robot, and the obstacle avoidance control system of the robot can be executed by the processor to implement the following steps:
  • A1. Acquire the current positioning data of the robot in real time or at timed intervals, and determine, according to the current positioning data and the predetermined position data of each obstacle in the moving area, whether there is an obstacle on the path from the current positioning position to the target position whose distance from the current positioning position is less than a preset distance;
  • A2. If there is an obstacle whose distance from the current positioning position is less than the preset distance, calculate the shortest distance between the robot and the obstacle according to the acquired positioning data, the predetermined 3D model of the robot, and the predetermined 3D model of the obstacle;
  • A3. Calculate the direction in which the robot should currently move according to the acquired positioning data, the calculated shortest distance, and the 3D model of the obstacle, and control the motion posture of the robot according to the calculated motion direction to avoid the obstacle.
  • a fourth aspect of the present application provides a computer readable storage medium having an obstacle avoidance control system of a robot stored thereon, the obstacle avoidance control system of the robot being executable by at least one processor to implement the following steps:
  • A1. Acquire the current positioning data of the robot in real time or at timed intervals, and determine, according to the current positioning data and the predetermined position data of each obstacle in the moving area, whether there is an obstacle on the path from the current positioning position to the target position whose distance from the current positioning position is less than a preset distance;
  • A2. If there is an obstacle whose distance from the current positioning position is less than the preset distance, calculate the shortest distance between the robot and the obstacle according to the acquired positioning data, the predetermined 3D model of the robot, and the predetermined 3D model of the obstacle;
  • A3. Calculate the direction in which the robot should currently move according to the acquired positioning data, the calculated shortest distance, and the 3D model of the obstacle, and control the motion posture of the robot according to the calculated motion direction to avoid the obstacle.
  • in the obstacle avoidance control system, method, robot, and storage medium proposed by the present invention, when it is detected from the robot's current positioning data that there is an obstacle whose distance from the current positioning position is less than the preset distance, the shortest distance between the robot and the obstacle in three-dimensional space is calculated according to the current positioning data, the predetermined 3D model of the robot, and the predetermined 3D model of the obstacle, and the direction in which the robot should currently move is calculated in order to control the robot's motion posture. Since the robot's movement direction is controlled by the shortest distance between the robot and the obstacle in three-dimensional space, obstacles in all directions of three-dimensional space can be detected and avoided, and the robot's obstacle avoidance can be effectively controlled.
  • FIG. 1 is a schematic flow chart of an embodiment of a robot obstacle avoidance method according to an embodiment of the present invention
  • FIG. 2a is a schematic diagram of the sector division of an obstacle 3D model in an embodiment of the robot obstacle avoidance method according to the present invention;
  • FIG. 2b is a schematic diagram of the sector model portion labeled k in an embodiment of the robot obstacle avoidance method according to the present invention;
  • FIG. 3a is a schematic diagram of the 3D models of a robot and an obstacle in an embodiment of the robot obstacle avoidance method according to the present invention;
  • FIG. 3b is a schematic diagram of the sector division of a cubic obstacle model in an embodiment of the robot obstacle avoidance method according to the present invention;
  • FIG. 3c is a schematic diagram of screening model portions in an embodiment of the robot obstacle avoidance method according to the present invention;
  • FIG. 3d is a schematic diagram of calculating a shortest distance vector in an embodiment of the robot obstacle avoidance method according to the present invention;
  • FIG. 4 is a schematic diagram of determining an effective occlusion region in an embodiment of the robot obstacle avoidance method according to the present invention;
  • FIG. 5 is a schematic diagram of an operating environment of a preferred embodiment of the obstacle avoidance control system 10 of the present invention.
  • FIG. 6 is a functional block diagram of a preferred embodiment of the obstacle avoidance control system 10 of the present invention.
  • the invention provides a robot obstacle avoidance method.
  • FIG. 1 is a schematic flow chart of an embodiment of a robot obstacle avoidance method according to an embodiment of the present invention.
  • the robot obstacle avoidance method comprises:
  • Step S10: The obstacle avoidance control system of the robot acquires the current positioning data of the robot (for example, its position and posture in the room) in real time or at timed intervals (for example, every 2 seconds), and determines, according to the current positioning data and the predetermined position data of each obstacle in the moving area, whether there is an obstacle on the path from the current positioning position to the target position whose distance from the current positioning position is less than the preset distance.
  • the robot's own sensors, such as proximity sensors (for example, ultrasonic, infrared, or laser sensors), can be used for positioning and for determining the distance to each obstacle in the predetermined moving area.
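The check performed by the determining module in step S10 can be sketched as follows; the function name, the 2-D map coordinates, and the 1000 mm preset distance are illustrative assumptions, not details from the patent:

```python
import math

def obstacles_within_preset_distance(current_pos, obstacle_positions, preset_distance):
    """Return the obstacles whose straight-line distance from the robot's
    current positioning position is less than the preset distance."""
    near = []
    for pos in obstacle_positions:
        d = math.hypot(pos[0] - current_pos[0], pos[1] - current_pos[1])
        if d < preset_distance:
            near.append((pos, d))
    return near

# Robot at (0, 0), preset distance 1000 mm: only the 500 mm obstacle triggers.
near = obstacles_within_preset_distance((0, 0), [(500, 0), (3000, 0)], 1000)
```

In practice the positions would come from the robot's localization stack and the predetermined obstacle map mentioned above.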
  • Step S20: If there is an obstacle whose distance from the current positioning position is less than the preset distance, calculate the shortest distance between the robot and the obstacle according to the acquired positioning data, the predetermined 3D model of the robot, and the predetermined 3D model of the obstacle.
  • the robot continuously moves along the path to the target position and detects, in real time or at timed intervals, the distance between itself and the obstacles in the moving area. If it is determined that there is an obstacle whose distance from the current positioning position is less than the preset distance, the shortest distance between the robot and the obstacle is calculated according to the acquired positioning data, the predetermined 3D model of the robot, and the predetermined 3D model of the obstacle. This distance is used to determine whether the robot would collide with the obstacle when moving along the path to the target position in three-dimensional space. In this way, obstacles can be detected not only in the plane at the height of the robot's sensors but anywhere in three-dimensional space, and potential obstacles can be detected in all directions, both where sensors are mounted and where they are not.
  • the predetermined 3D model of the robot and the 3D models of the obstacles in the moving area may be pre-stored in a storage unit of the robot, or may be obtained by the robot accessing an Internet of Things system server through a wireless communication unit; no limitation is made here.
  • Step S30: Calculate the direction in which the robot should currently move according to the acquired positioning data, the calculated shortest distance, and the 3D model of the obstacle, and control the motion posture of the robot according to the calculated motion direction to avoid obstacles in all directions of three-dimensional space. Since the robot's movement direction is controlled by the shortest distance between the robot and the obstacle in three-dimensional space, obstacles in all directions of three-dimensional space can be detected and avoided, and the robot's obstacle avoidance can be effectively controlled.
  • step S20 includes:
  • Step S201: Pre-process the predetermined 3D model of the robot and the 3D model of the obstacle.
  • Step S202: Calculate the shortest distance between the robot and the obstacle from the acquired positioning data, the pre-processed robot 3D model data, and the pre-processed obstacle 3D model data, using a predetermined distance calculation rule.
  • the 3D models of the robot and the obstacle can be pre-processed, for example converted into convex bodies, so that the shortest distance can be calculated more accurately and quickly.
  • in step S201, the robot 3D model pre-processing includes: for each joint of the robot, directly using a predetermined algorithm (for example, the QuickHull fast convex hull algorithm) to find the smallest convex polyhedron enclosing the joint, thereby transforming the non-convex robot model into a convex model.
  • a robot 3D model processed in this way can effectively improve the calculation speed and accuracy when the shortest distance vector is subsequently calculated.
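The text names QuickHull; any convex-hull routine yields the same smallest enclosing convex region. As a self-contained stand-in, here is a 2-D monotone-chain hull applied to one joint's projected vertices (for a 3-D joint mesh one would typically call a library implementation such as scipy.spatial.ConvexHull, which uses Qhull):

```python
def cross(o, a, b):
    # z-component of (a - o) x (b - o); > 0 means a counter-clockwise turn
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Smallest convex polygon enclosing `points` (2-D monotone chain)."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:                      # build lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):            # build upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

# A non-convex (L-shaped) joint cross-section collapses to its convex hull:
hull = convex_hull([(0, 0), (2, 0), (2, 1), (1, 1), (1, 2), (0, 2)])
```

The concave vertex (1, 1) disappears, leaving the smallest convex polygon around the joint, which is exactly the property the pre-processing step relies on.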
  • for the obstacle 3D model pre-processing, there are three options: first, constructing a convex bounding box of the non-convex polyhedron so that collision detection is performed on a convex body; second, convex decomposition of the non-convex polyhedron, transforming the non-convex model into multiple convex bodies for collision detection; third, sector division of the obstacle 3D model (i.e., dividing it into equal fan-shaped sections) followed by convex decomposition of each individual sector. This third approach, sector division followed by convex decomposition, is not only faster than the first two but also more accurate.
  • the step of sector-dividing the obstacle 3D model includes:
  • X1. Construct a spherical bounding box B of the obstacle 3D model; the n sector portions of the spherical bounding box serve as the n model portions of the obstacle 3D model;
  • X2. Through the sphere center O, draw a line L coinciding with the z-axis of the three-dimensional coordinate system Oxyz; the xOz plane is then the initial sector dividing plane, denoted π1, and π1 divides the obstacle 3D model into two parts;
  • X3. Rotate π1 about the line L by a fixed angle θ (θ being the angle between adjacent sectors) to obtain a new plane π2; continue rotating to obtain π3, and after m−1 rotations obtain the m-th plane πm;
  • X4. The m planes divide the spherical bounding box B into 2m parts, so the obstacle 3D model is divided into 2m model portions (i.e., n = 2m).
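Given the coordinate system of steps X1 to X4, the sector containing any point follows directly from its azimuth about the z-axis line L; the function name below is illustrative:

```python
import math

def sector_label(x, y, n):
    """Label (1..n) of the sector containing point (x, y, z), measured
    counter-clockwise from the positive X-axis of the obstacle frame.
    Each of the n sectors spans 360/n degrees about the z-axis line L
    (the z-coordinate does not affect the azimuth, so it is omitted)."""
    angle = math.degrees(math.atan2(y, x)) % 360.0
    return int(angle // (360.0 / n)) + 1

# With n = 32 sectors (11.25 degrees each) and a point at (1800, -100)
# in the obstacle frame, the azimuth is about 356.8 degrees:
label = sector_label(1800, -100, 32)
```

The modulo wraps negative azimuths from `atan2` into [0°, 360°), so labels increase monotonically around the z-axis.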
  • the step of performing convex decomposition on each divided sector includes: triangulating the obstacle 3D model with the Delaunay triangulation algorithm to produce a set of triangular patches, and constructing a corresponding convex body for each patch; for example, a triangular patch of zero thickness is extruded by a predetermined thickness along its plane normal vector to become a thin convex body.
  • the predetermined distance calculation rule includes:
  • each model portion obtained by dividing the obstacle 3D model is screened by a predetermined screening algorithm, and the model portions on which the distance calculation is to be performed are selected;
  • the shortest distance between the robot and the selected model portions is calculated with a predetermined distance calculation algorithm (for example, the GJK algorithm), and this is the shortest distance between the robot and the obstacle 3D model.
  • FIG. 2a is a schematic diagram of the sector division of the obstacle 3D model in an embodiment of the robot obstacle avoidance method.
  • FIG. 2b is a schematic diagram of the sector model portion labeled k in an embodiment of the robot obstacle avoidance method of the present invention.
  • the predetermined screening algorithm includes:
  • the n model portions obtained by sector-dividing the obstacle 3D model are used as the n nodes of the obstacle, and a key-value mapping is established for each with respect to the initial sector dividing plane (i.e., the xOz plane):
  • Hash(i) = i × (360°/n)
  • where Hash(i) represents the angle between the sector model portion labeled i and the positive X-axis of the obstacle coordinate system.
  • the transformation matrix of joint i is the product of the transformation matrices along the joint chain: T_i = A_0·A_1·A_2·…·A_{i−1}·A_i.
  • the real-time value Q_i(x, y, z) of the origin of each joint's local coordinate system during the robot's motion is calculated from T_i; here Q_i(x, y, z) represents the coordinates of the robot joint in the robot coordinate system.
  • T_r represents the transformation matrix from the robot coordinate system to the obstacle coordinate system (a 4×4 matrix; since the robot coordinate system and the obstacle coordinate system are both determined, the matrix can be calculated directly). The coordinates Q_i(x_t, y_t, z_t) of the robot joint in the obstacle coordinate system are then obtained, and from them the declination α of the joint in the obstacle coordinate system.
  • given the declination α, the corresponding label can be calculated through the hash function Hash(i) representing the declination mapping relationship.
  • the sector model portions to be used in the distance calculation are then screened out. For example, if the calculated sector model portion has label k, the sector model portions in the range [k−M, k+N] can be selected for the shortest distance calculation, where M and N are preset values; that is, several sector model portions near the portion labeled k are selected as the model portions on which the shortest distance calculation is performed.
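Assuming the declination α has already been computed in the obstacle coordinate system, the screening reduces to inverting Hash(i) and wrapping the [k−M, k+N] label window modulo n; the function name and the M = N = 2 window below are illustrative:

```python
def labels_to_check(alpha_deg, n, M, N):
    """Given the joint's declination alpha (degrees) in the obstacle
    coordinate system, return the sector labels in [k-M, k+N] on which
    the shortest-distance calculation is run; labels wrap modulo n."""
    # invert Hash(i) = i * (360/n): label of the sector containing alpha
    k = int((alpha_deg % 360.0) // (360.0 / n)) + 1
    return [((k - 1 + off) % n) + 1 for off in range(-M, N + 1)]

# Joint at declination ~356.8 deg, n = 32 sectors, window M = N = 2:
sel = labels_to_check(356.8, 32, 2, 2)
```

Note how the window wraps past sector 32 back to sectors 1 and 2, matching the neighborhood selection described above.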
  • FIG. 3a is a schematic diagram of a 3D model of a robot and an obstacle in an embodiment of the robot obstacle avoidance method according to the present invention.
  • in this embodiment, the robot has only a moving chassis and no other joints such as arms.
  • the robot 3D model is that of a robot with a height of 1500 mm and a motion chassis radius of 320 mm, and the obstacle 3D model is a simple cube of size 2200 mm × 2200 mm × 1000 mm.
  • the current coordinates of the robot in the obstacle model coordinate system are (1800, -100).
  • FIG. 3b is a schematic diagram of the sector division of a cubic obstacle model according to an embodiment of the robot obstacle avoidance method.
  • the pre-processing mainly consists of sector-dividing the obstacle model; as shown in FIG. 3b, the obstacle model is divided into 32 sector portions, labeled in order starting from the X-axis.
  • FIG. 3c is a schematic diagram of screening a model part in an embodiment of a robot obstacle avoidance method according to an embodiment of the present invention.
  • the robot used in this embodiment has only a moving chassis and no other moving joints such as arms, so the chassis posture represents the overall posture of the robot; the current robot position is (1800, -100).
  • FIG. 3d is a schematic diagram of calculating a shortest distance vector in an embodiment of a robot obstacle avoidance method according to the present invention.
  • after screening has reduced the candidates to obstacle blocks (1, 2, 31, 32), the shortest distance between the robot and the obstacle is calculated directly with the GJK algorithm, as shown in FIG. 3d.
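A full GJK implementation is longer than a sketch allows, but for the circular-chassis robot of this worked example the shortest distance to a convex obstacle portion reduces to the distance from the chassis centre to the polygon boundary minus the chassis radius. The sketch below assumes the centre lies outside the polygon and the obstacle frame origin sits at the cube's centre (so its 2200 mm footprint spans ±1100 mm); these geometric assumptions are illustrative, not taken from the patent's computation:

```python
import math

def point_segment_dist(p, a, b):
    """Distance from point p to the segment a-b."""
    ax, ay = a; bx, by = b; px, py = p
    abx, aby = bx - ax, by - ay
    denom = abx * abx + aby * aby
    t = 0.0
    if denom > 0:
        # clamp the projection parameter to the segment
        t = max(0.0, min(1.0, ((px - ax) * abx + (py - ay) * aby) / denom))
    cx, cy = ax + t * abx, ay + t * aby
    return math.hypot(px - cx, py - cy)

def chassis_polygon_dist(center, radius, polygon):
    """Shortest distance between a circular chassis and a convex polygon
    (vertices in order). Assumes center is outside the polygon; a value
    <= 0 means the chassis already touches or overlaps the obstacle."""
    edges = zip(polygon, polygon[1:] + polygon[:1])
    d = min(point_segment_dist(center, a, b) for a, b in edges)
    return d - radius

# Chassis of radius 320 at (1800, -100); cube footprint spans x,y in [-1100, 1100]:
d = chassis_polygon_dist((1800, -100), 320,
                         [(1100, -1100), (1100, 1100), (-1100, 1100), (-1100, -1100)])
```

Here the nearest cube face is the plane x = 1100, so the gap is 1800 − 1100 = 700 mm to the boundary and 380 mm after subtracting the chassis radius; GJK generalizes this to arbitrary convex bodies in 3-D.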
  • step S30 includes:
  • the obstacle avoidance control system of the robot analyzes whether obstacle avoidance is needed according to the calculated shortest distance: if the calculated shortest distance is greater than a preset distance threshold, it is determined that obstacle avoidance is not required; if the calculated shortest distance is less than or equal to the preset distance threshold, it is determined that obstacle avoidance is needed. If obstacle avoidance is needed, the obstacle avoidance control system calculates the direction in which the robot should currently move according to the acquired positioning data, the calculated shortest distance, and the 3D model of the obstacle, and controls the robot's motion posture according to the calculated motion direction.
  • the step of calculating the direction in which the robot should currently move according to the acquired positioning data, the calculated shortest distance, and the 3D model of the obstacle includes: determining a first preset type of obstacle avoidance parameter (for example, a virtual repulsive force) and a second preset type of obstacle avoidance parameter (for example, one determined by the distance between the target point position and the robot's current positioning position), and calculating the motion direction from them.
  • FIG. 4 is a schematic diagram of determining an effective occlusion region in an embodiment of the robot obstacle avoidance method according to the present invention.
  • the predetermined projection analysis rule is:
  • in a coordinate system plane, the position point P1 represents the current position of the robot, and the position point P2 represents the position of the target point, that is, the target position.
  • the projection region P3 represents the projection of the obstacle 3D model onto the coordinate system plane; connecting P1 and P2 in the plane gives a straight line J.
  • take any point PS in the projection region P3 (e.g., in subregion S1 or S2), drop a perpendicular from PS onto the line J, and let PJ be the foot of the perpendicular on J, giving the vector from PJ to PS; then calculate the angle β between the shortest-distance vector and this vector.
  • if β is an acute angle, the region where PS lies is determined to be an effective occlusion region (for example, the effective occlusion projection region S2 in FIG. 4); otherwise, the region where PS lies is not an effective occlusion region.
  • the first preset type of obstacle avoidance parameter is a virtual repulsive force, and the second preset type of obstacle avoidance parameter is a virtual gravitational force.
  • the first preset type of obstacle avoidance parameter is determined according to the calculated shortest distance and the area of the effective occlusion region; the second preset type of obstacle avoidance parameter is determined according to the distance between the target position and the robot's current positioning position.
  • the direction in which the robot should currently move is determined from the first and second preset types of obstacle avoidance parameters: the resultant direction of the virtual gravitational force and the virtual repulsive force is calculated, and this resultant force direction is the direction in which the robot should currently move.
  • the first calculation rule is: let d be the shortest-distance vector between the robot and the obstacle, and let s be the area of the effective occlusion region; the virtual repulsive force of the obstacle on the robot is then given by a relation of d, s, and the following preset quantities:
  • k_r and b_r represent preset virtual repulsion coefficients;
  • s_0 represents a preset threshold for the area of the effective occlusion region, s_0 > 0;
  • d_0 represents a preset distance threshold, d_0 > 0;
  • the direction of the virtual repulsive force is along the shortest-distance vector.
  • thus, when the robot is far from the obstacle, beyond the set distance threshold d_0, obstacle avoidance is not performed and the magnitude of the repulsive force is 0; once within the obstacle avoidance range (shortest distance less than d_0), if the area s of the effective occlusion region is relatively large, exceeding the set value s_0, the repulsive force is enlarged so that obstacle avoidance begins earlier and large obstacles can be bypassed in advance.
  • the second calculation rule is: the magnitude of the virtual gravitational force is determined by k_t, the preset gravitational coefficient, and d_t, the distance between the target position and the robot's current positioning position; the direction of the virtual gravitational force points toward the target position.
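The two calculation rules combine as a standard artificial-potential-field step. The exact repulsion relation is only partially legible in the source, so the magnitude below — zero beyond d0, growing as the shortest distance d shrinks, and scaled up when the effective occlusion area s exceeds s0 — is an assumed form in that spirit, with illustrative coefficients:

```python
import math

def motion_direction(robot, target, d_vec, s,
                     k_r=1e6, k_t=0.002, d0=1000.0, s0=1.0):
    """Unit resultant direction of virtual attraction toward the target
    and virtual repulsion opposite d_vec (the robot-to-obstacle shortest-
    distance vector). All coefficients are illustrative assumptions."""
    # attraction: magnitude k_t * d_t, pointing at the target
    fa = (k_t * (target[0] - robot[0]), k_t * (target[1] - robot[1]))
    # repulsion: zero beyond d0; grows as d shrinks, scaled by s beyond s0
    d = math.hypot(d_vec[0], d_vec[1])
    if d == 0 or d >= d0:
        fr = (0.0, 0.0)
    else:
        mag = k_r * (1.0 / d - 1.0 / d0) * max(s / s0, 1.0)
        fr = (-mag * d_vec[0] / d, -mag * d_vec[1] / d)
    fx, fy = fa[0] + fr[0], fa[1] + fr[1]
    norm = math.hypot(fx, fy)
    return (fx / norm, fy / norm) if norm > 0 else (0.0, 0.0)

# Obstacle close by at bearing 45 deg: the resultant veers away from it.
d1 = motion_direction((0, 0), (5000, 0), (300, 300), s=2.0)
# Obstacle beyond d0: pure attraction, straight toward the target.
d2 = motion_direction((0, 0), (5000, 0), (2000, 0), s=2.0)
```

The occlusion-area scaling implements the behavior described above: a large effective occlusion area strengthens the repulsion so that avoidance of large obstacles starts earlier.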
  • the invention further provides an obstacle avoidance control system for a robot.
  • FIG. 5 is a schematic diagram of an operating environment of a preferred embodiment of the obstacle avoidance control system 10 of the present invention.
  • the obstacle avoidance control system 10 is installed and operated in the robot 1.
  • the robot 1 may include, but is not limited to, a memory 11, a processor 12, and a display 13.
  • FIG. 5 shows only the robot 1 with components 11-13, but it should be understood that not all of the illustrated components are required; more or fewer components may be implemented instead.
  • the memory 11 includes a memory and at least one type of readable storage medium.
  • the memory provides a cache for the operation of the robot 1;
  • the readable storage medium may be a non-volatile storage medium such as a flash memory, a hard disk, a multimedia card, a card type memory, or the like.
  • the readable storage medium may be an internal storage unit of the robot 1, such as a hard disk or memory of the robot 1.
  • the readable storage medium may also be an external storage device of the robot 1, such as a plug-in hard disk equipped on the robot 1, a smart media card (SMC), a secure digital (SD) card, or the like.
  • the readable storage medium of the memory 11 is generally used to store application software and various types of data installed in the robot 1, such as program codes of the obstacle avoidance control system 10.
  • the memory 11 can also be used to temporarily store data that has been output or is about to be output.
  • the processor 12, in some embodiments, may be a Central Processing Unit (CPU), a microprocessor, or another data processing chip for running program code or processing data stored in the memory 11.
  • the processor 12 executes the obstacle avoidance control system 10 to implement any of the steps of the above-described robot obstacle avoidance method.
  • the display 13, in some embodiments, may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch panel, or the like.
  • the display 13 is used to display information processed in the robot 1 and to display a visualized user interface, such as an application menu interface or an application icon interface.
  • the components 11-13 of the robot 1 communicate with one another via a system bus.
  • FIG. 6 is a functional block diagram of a preferred embodiment of the obstacle avoidance control system 10 of the present invention.
  • the obstacle avoidance control system 10 can be divided into a determination module 01, a calculation module 02, and a control module 03.
  • as used herein, a module refers to a series of computer program instruction segments capable of performing a specific function, used to describe the execution process of the obstacle avoidance control system 10 in the robot 1.
  • when the processor 12 executes the computer program instruction segments of the modules of the obstacle avoidance control system 10, any step of the above-described robot obstacle avoidance method can be realized through the operations and functions implemented by the respective instruction segments.
  • the following description will specifically describe the operations and functions implemented by the determining module 01, the computing module 02, and the control module 03.
  • the determining module 01 is configured to obtain current positioning data of the robot (for example, its indoor position, posture, etc.) in real time or periodically (for example, every 2 seconds), and to determine, according to the current positioning data and the predetermined position data of each obstacle in the moving area, whether the path from the current positioning position to the target position contains an obstacle whose distance from the current positioning position is less than a preset distance.
  • for example, the robot's own sensors can be used for positioning and for determining the distance to each obstacle in the predetermined moving area;
  • proximity sensors (for example, ultrasonic, infrared, or laser sensors) may be mounted on the robot for this purpose.
  • the calculating module 02 is configured to, if there is an obstacle whose distance from the current positioning position is less than the preset distance, calculate the shortest distance between the robot and the obstacle according to the obtained positioning data, the predetermined 3D model of the robot, and the predetermined 3D model of the obstacle.
  • after the distances between the robot's current positioning position and the obstacles in the predetermined moving area are detected, if no obstacle is closer than the preset distance, the robot continues to move along the path to the target position while detecting, in real time or periodically, its distance to each obstacle in the moving area.
  • if it is determined that there is an obstacle whose distance from the current positioning position is less than the preset distance, the shortest distance between the robot and the obstacle is calculated according to the obtained positioning data, the predetermined 3D model of the robot, and the predetermined 3D model of the obstacle.
  • this shortest distance is used to judge whether the robot will collide with the obstacle when moving along the path to the target position in three-dimensional space, so that obstacles can be detected not only in the plane at the height of the robot's sensors but anywhere in three-dimensional space.
  • potential obstacles in all directions in three-dimensional space can thus be detected, both in the directions where the robot has sensors mounted and in the other directions where it has none.
  • the predetermined 3D model of the robot and the 3D models of the obstacles in the moving area may be pre-stored in a storage unit of the robot, or may be obtained by the robot by accessing an Internet of Things system server through a wireless communication unit; no limitation is made here.
  • the control module 03 is configured to calculate the direction in which the robot should currently move according to the obtained positioning data, the calculated shortest distance, and the 3D model of the obstacle, and to control the motion posture of the robot according to the calculated movement direction, so as to avoid potential obstacles in all directions in three-dimensional space and effectively control obstacle avoidance while the robot moves along the path to the target position.
  • in this embodiment, when such an obstacle is detected, the shortest distance between the robot and the obstacle in three-dimensional space is calculated from the positioning data and the 3D models, and the direction in which the robot should currently move is calculated to control the robot's motion posture. Since the movement direction of the robot can be controlled through the shortest distance between the robot and the obstacle in three-dimensional space, obstacles in all directions in three-dimensional space can be detected and avoided, effectively controlling robot obstacle avoidance.
  • calculation module 02 is further configured to:
  • the predetermined 3D model of the robot and the 3D model of the obstacle are pre-processed; then, using the obtained positioning data, the pre-processed robot 3D model data, and the pre-processed obstacle 3D model data, the shortest distance between the robot and the obstacle is calculated with a predetermined distance calculation rule.
  • for example, since robots and obstacles are generally non-convex, the 3D models of the robot and the obstacle may be pre-processed, for example converted into convex bodies, so that the shortest distance can subsequently be calculated more accurately and quickly.
  • the calculation module 02 is further configured to: for each joint of the robot, directly use a predetermined algorithm (for example, the QuickHull fast convex hull algorithm) to find the smallest convex polyhedron surrounding the joint, so as to convert the non-convex robot model into a convex model.
  • a robot 3D model convexified in this way effectively improves both the calculation speed and the calculation accuracy when the shortest distance vector is subsequently calculated.
  • there are three ways of pre-processing the obstacle 3D model:
  • the first, constructing a convex bounding box of the non-convex polyhedron to convert it into a convex body for collision detection;
  • the second, performing convex decomposition on the non-convex polyhedron so that the non-convex model is converted into multiple convex bodies for collision detection;
  • the third, dividing the obstacle 3D model into equal fan-shaped parts (fan-shaped subdivision) and then performing convex decomposition on each individual sector.
  • compared with the first two methods, this approach of fan-shaped division followed by convex decomposition is not only faster but also more accurate.
  • calculation module 02 is further configured to:
  • a spherical bounding box of the obstacle 3D model to be fan-divided is built, and the center of the spherical bounding box is found; an initial fan-shaped bisecting plane passing through the center is set, and the initial fan-shaped bisecting plane is rotated about the center multiple times by a preset rotation angle so as to divide the spherical bounding box into n fan-shaped parts; the n fan-shaped parts of the spherical bounding box serve as the n model parts of the obstacle 3D model.
  • for example, in one specific implementation: the spherical bounding box B of the obstacle 3D model M to be fan-divided is built, its center O is found, and a three-dimensional coordinate system Oxyz is established at O; through the center O, a line L coinciding with the z-axis of Oxyz is drawn; the xoz plane is then the initial fan-shaped bisecting plane, denoted α1, and α1 divides the obstacle 3D model into 2 parts;
  • α1 is rotated about the line L by a certain angle β (β representing the angle between adjacent sectors) to obtain a new plane α2; rotating the new plane by β again gives the plane α3, and after m-1 rotations the m-th plane αm is obtained;
  • with β = 180°/m, the m planes divide the spherical bounding box B into 2m equal parts, and the obstacle 3D model is accordingly divided into 2m model parts.
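As an illustration of the fan-shaped division above, the sketch below assigns a point to one of the n = 2m sectors by its azimuth about the z-axis line L through the sphere center. The function name `sector_label` and the counter-clockwise, 1-based labeling are assumptions for illustration, chosen to match the worked example later in the document, not part of the patent itself.

```python
import math

def sector_label(point, center, n):
    """Assign a point to one of n fan-shaped sectors of the spherical
    bounding box, by its azimuth about the z-axis line through the center."""
    dx = point[0] - center[0]
    dy = point[1] - center[1]
    # Azimuth in [0, 360), measured counter-clockwise from the positive
    # x-axis (the initial bisecting plane is the xoz plane).
    azimuth = math.degrees(math.atan2(dy, dx)) % 360.0
    width = 360.0 / n                   # angular width of one sector
    return int(azimuth // width) + 1    # labels 1..n, counter-clockwise

# Rotating the initial plane m-1 times by beta = 180/m degrees yields
# m planes and 2m sectors, i.e. n = 2m.
m = 16
n = 2 * m
```

With n = 32 the sector width is 11.25°, so a point at azimuth 356.8° (such as the robot position (1800, -100) of the later embodiment) falls into sector 32.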
  • calculation module 02 is further configured to:
  • the Delaunay triangulation algorithm is used to triangulate the surface of the obstacle 3D model, producing a set of triangular patches; a corresponding convex block is then constructed for each triangular patch. For example, a triangular patch of zero thickness is stretched by a preset thickness in the direction of its plane normal vector to become a convex block.
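The stretching of a zero-thickness triangular patch into a convex block can be sketched as follows; `extrude_triangle` is a hypothetical helper, assuming the preset thickness is applied along the unit plane normal of the patch.

```python
import numpy as np

def extrude_triangle(tri, thickness):
    """Stretch a zero-thickness triangular patch along its plane normal by a
    preset thickness, yielding the 6 vertices of a convex prism ('block')."""
    a, b, c = (np.asarray(p, dtype=float) for p in tri)
    normal = np.cross(b - a, c - a)
    normal /= np.linalg.norm(normal)    # unit plane normal vector
    offset = normal * thickness
    # Original triangle plus its translated copy form a convex prism.
    return np.array([a, b, c, a + offset, b + offset, c + offset])

prism = extrude_triangle([(0, 0, 0), (1, 0, 0), (0, 1, 0)], 0.1)
```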
  • the predetermined distance calculation rule includes:
  • according to the robot's current positioning data (for example, its indoor position and posture) and a predetermined screening algorithm, the model parts obtained by fan-dividing the obstacle 3D model are screened to select the model parts for which the distance is to be calculated;
  • using the obtained positioning data and the selected model parts, the shortest distance between the robot and the selected model parts is calculated with a predetermined distance calculation algorithm (for example, the GJK algorithm); this shortest distance is the shortest distance between the robot and the obstacle 3D model.
  • the predetermined screening algorithm comprises:
  • the n model parts obtained by fan-dividing the obstacle 3D model are used as the n nodes of the obstacle, and a hash table is built whose keys are the deflection angles relative to the initial fan-shaped bisecting plane (i.e., the xoz plane) and whose values are the geometric data of the model parts, for model node management; the sector model parts are labeled starting from 1; for n equal sectors, the angle between adjacent sectors is 360°/n, and the hash function representing the deflection-angle mapping for the sector model part labeled i is:
  • Hash(i) = i*(360°/n)
  • where Hash(i) represents the deflection angle between the sector model part labeled i and the positive X-axis of the obstacle coordinate system;
  • the kinematics of the robot is established and the pose of each joint is calculated from it; while the robot is moving, the kinematic equation is Ti = A0A1A2…Ai-1Ai, where Ak (k = 1, 2, …, i) is the homogeneous transformation matrix between robot joint coordinate systems (determined from the D-H parameters of each joint), A0 is the matrix of the robot's current position, and Ti is the pose of the i-th joint relative to the robot coordinate system;
  • the real-time updated value Qi(x, y, z) of the origin coordinates of each joint's local coordinate system during robot motion is calculated from Ti, and the deflection angle α of the joint in the obstacle coordinate system can then be obtained as α = f(Qi(x, y, z)):
  • Qi(x, y, z) represents the coordinates of the robot joint in the robot coordinate system;
  • Tr represents the transformation matrix from the robot coordinate system to the obstacle coordinate system (a 4*4 matrix; since the robot coordinate system and the obstacle coordinate system are already determined, this matrix can be calculated directly); the coordinates Qi(xt, yt, zt) of the robot joint in the obstacle coordinate system are then Qi(xt, yt, zt) = Tr·Qi(x, y, z);
  • assuming the positive Z-axis of the obstacle coordinate system points upward and a right-handed coordinate system is followed, the deflection angle of the joint in the obstacle coordinate system is α, obtained by solving the corresponding inverse trigonometric relation (given as a formula image in the original).
  • once the deflection angle α is obtained, the label of the corresponding sector model part can be calculated from the hash function Hash(i) representing the deflection-angle mapping,
  • and the model parts to be distance-calculated are selected based on that label. For example, if the calculated sector model part has label k, the sector model parts with labels in the range [k-M, k+N] can be selected for the shortest distance calculation, where M and N are preset values, so that several sector model parts near the part labeled k are selected as the model parts for the shortest distance calculation.
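The label screening above can be sketched as below; the rounding-up of the angle to a label and the wrap-around of labels into 1..n follow the worked example later in the document, while the function name `blocks_to_check` is a hypothetical choice.

```python
import math

def blocks_to_check(angle_deg, n, M, N):
    """Map a deflection angle to its sector label via Hash(i) = i*(360/n),
    then select the labels in [k-M, k+N], wrapped around into 1..n."""
    width = 360.0 / n                       # angle between adjacent sectors
    k = math.ceil(angle_deg / width)        # label of the nearest sector
    # Wrap labels beyond n back to the start (e.g. 33 -> 1 when n = 32).
    return [((k + d - 1) % n) + 1 for d in range(-M, N + 1)]
```

For the embodiment below, an angle of 354° with n = 32, M = 1, N = 2 yields blocks 31, 32, 1, and 2, matching the conversion of labels 33 and 34 described in the text.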
  • in one embodiment, the robot has only a moving chassis and no other moving joints such as arms;
  • the robot 3D model is 1500 mm high with a motion chassis radius of 320 mm;
  • the obstacle 3D model is a simple cube with dimensions of 2200mm*2200mm*1000mm.
  • the robot's current coordinates in the obstacle model coordinate system are (1800, -100).
  • since the robot used in this embodiment has only chassis motion and no other moving joints such as arms, the chassis pose represents the overall pose of the robot; with the current robot position (1800, -100), the deflection angle between the robot and the positive X-axis of the obstacle coordinate system is calculated to be 354 degrees.
  • the label of the corresponding sector model part is then 354/11.25 = 31.5, rounded up to 32, so the sector block to be distance-calculated is number 32; that is, the robot is closest to the obstacle block numbered 32.
  • obstacle blocks near k = 32 are then selected; with M = 1 and N = 2, the obstacle block range is [31, 34]. Labels above 32 need a simple conversion: 33 is converted to the obstacle block numbered 1 and 34 to the obstacle block numbered 2. As shown in Figure 3c, the obstacle blocks numbered 31, 32, 1, and 2 are finally selected for the shortest distance calculation.
  • with the candidate obstacle blocks reduced to (1, 2, 31, 32), the shortest distance between the robot and the obstacle is calculated directly with the GJK algorithm, as shown in Figure 3d.
  • control module 03 is further configured to:
  • the direction in which the robot should currently move is calculated according to the obtained positioning data, the calculated shortest distance, and the 3D model of the obstacle, and the motion posture of the robot is controlled according to the calculated movement direction.
  • control module 03 is further configured to:
  • a first preset type obstacle avoidance parameter (for example, a virtual repulsive force) is determined;
  • a second preset type obstacle avoidance parameter (for example, one based on the distance between the target point position and the current positioning position of the robot) is determined;
  • the predetermined projection analysis rule is:
  • in a plane coordinate system, the position point P1 represents the current position of the robot;
  • the position point P2 represents the position of the target point, that is, the target position;
  • the projection region P3 represents the projection of the obstacle 3D model onto the coordinate plane; P1 and P2 are connected in the plane to obtain a straight line J;
  • for a projection region P3 (e.g., S1 or S2), any point PS in the region is taken, a perpendicular to the line J is dropped from PS, and the intersection of the perpendicular with the line J is PJ, yielding the vector from PJ to PS; the angle θ between the shortest-distance vector and this vector is then calculated;
  • if θ is an acute angle, the region containing the point PS is determined to be an effective occlusion region (for example, the effective occlusion projection region S2 in FIG. 4); if θ is not acute, the region containing the point PS is not an effective occlusion region.
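The effective-occlusion test can be sketched as follows. Since the exact vectors appear only as formula images in the original, this sketch assumes the angle is taken between the shortest-distance vector and the vector from the perpendicular foot PJ to the sample point PS, with a positive dot product standing for an acute angle.

```python
def is_effective_occlusion(p1, p2, ps, shortest_vec):
    """Decide whether the projected region containing sample point ps is an
    effective occlusion region: drop a perpendicular from ps onto the line
    J = P1P2, take the vector from the foot pj to ps, and test whether its
    angle with the shortest-distance vector is acute."""
    jx, jy = p2[0] - p1[0], p2[1] - p1[1]
    t = ((ps[0] - p1[0]) * jx + (ps[1] - p1[1]) * jy) / (jx * jx + jy * jy)
    pj = (p1[0] + t * jx, p1[1] + t * jy)     # foot of the perpendicular
    v = (ps[0] - pj[0], ps[1] - pj[1])        # vector PJ -> PS
    dot = v[0] * shortest_vec[0] + v[1] * shortest_vec[1]
    return dot > 0.0                           # acute angle => effective
```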
  • the control module 03 is further configured to:
  • the resultant direction of the virtual gravitational force and the virtual repulsive force is calculated, and the resultant force direction is the direction in which the robot should currently move.
  • the first calculation rule is as follows (the formulas appear as images in the original):
  • using the vector of the shortest distance between the robot and the obstacle, with s being the area of the effective occlusion region,
  • the virtual repulsive force exerted by the obstacle on the robot is obtained from a predetermined relation, in which:
  • kr represents a preset virtual repulsion coefficient;
  • s0 represents a preset threshold for the area of the effective occlusion region, s0 > 0;
  • d 0 represents a preset distance threshold, d 0 >0
  • the direction of the virtual repulsive force is the same as that of the shortest-distance vector;
  • when the robot is far from the obstacle, i.e. the shortest distance exceeds the set distance threshold d0, obstacle avoidance is not performed and the magnitude of the virtual repulsive force is 0; once the robot enters the obstacle avoidance range (the shortest distance is less than d0), a relatively large effective occlusion area s exceeding the set value s0 enlarges the virtual repulsive force, so that the robot begins avoiding while still relatively far away, and large obstacles can thus be avoided in advance;
  • the virtual gravitational force of the target position on the robot is given by a second relation (a formula image in the original), where kt represents the preset gravitational coefficient, dt represents the distance between the target position and the current positioning position of the robot, and the virtual gravitational force is directed toward the target position.
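The repulsion and attraction relations themselves appear only as formula images in the original, so the sketch below assumes a conventional artificial-potential-field form that matches the stated behavior: zero repulsion beyond d0, repulsion amplified when the effective occlusion area s exceeds s0, and attraction proportional to the distance to the target. All coefficient values are illustrative assumptions.

```python
import math

def resultant_direction(shortest_vec, d, s, target_vec, d_t,
                        k_r=1.0, k_t=0.01, d_0=1000.0, s_0=5.0e5):
    """Combine a virtual repulsive force (along the shortest-distance
    vector, zero beyond d_0, amplified when the effective occlusion area s
    exceeds s_0) with a virtual gravitational force toward the target, and
    return the unit vector of the resultant: the direction to move."""
    if d >= d_0:
        rep = (0.0, 0.0)                     # too far: no obstacle avoidance
    else:
        gain = k_r * (1.0 / d - 1.0 / d_0)   # grows as the robot closes in
        if s > s_0:
            gain *= s / s_0                  # large occlusion: avoid earlier
        nrm = math.hypot(*shortest_vec)
        rep = (gain * shortest_vec[0] / nrm, gain * shortest_vec[1] / nrm)
    n_t = math.hypot(*target_vec)
    att = (k_t * d_t * target_vec[0] / n_t, k_t * d_t * target_vec[1] / n_t)
    fx, fy = rep[0] + att[0], rep[1] + att[1]
    m = math.hypot(fx, fy)
    return (fx / m, fy / m)
```

Beyond d0 the function simply heads for the target; inside d0 the repulsion tilts the resultant away from the obstacle.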
  • the present invention also provides a computer readable storage medium.
  • the computer readable storage medium stores an obstacle avoidance control system of the robot, and the obstacle avoidance control system of the robot can be executed by at least one processor to:
  • Step S10: Obtain current positioning data of the robot in real time or periodically, and determine, according to the current positioning data and the position data of each obstacle in the predetermined moving area, whether the path from the current positioning position to the target position contains an obstacle whose distance from the current positioning position is less than a preset distance;
  • Step S20: If there is an obstacle whose distance from the current positioning position is less than the preset distance, calculate the shortest distance between the robot and the obstacle according to the obtained positioning data, the predetermined 3D model of the robot, and the predetermined 3D model of the obstacle;
  • Step S30: Calculate the direction in which the robot should currently move according to the obtained positioning data, the calculated shortest distance, and the 3D model of the obstacle, and control the motion posture of the robot according to the calculated movement direction so as to avoid the obstacle.
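Steps S10 to S30 can be sketched as one iteration of a control loop. Here `shortest_distance` and `move_direction` stand in for the model-based distance calculation of step S20 and the direction calculation of step S30; they are hypothetical callables, not APIs from the patent.

```python
import math

def avoidance_step(robot_pos, target_pos, obstacles, preset_dist,
                   shortest_distance, move_direction):
    """One iteration of steps S10-S30: check for a nearby obstacle on the
    path (S10); if one is found, use the 3D-model shortest distance (S20)
    and compute the direction to move (S30); otherwise head for the target."""
    near = [o for o in obstacles
            if math.dist(robot_pos, o["position"]) < preset_dist]       # S10
    if not near:
        dx = target_pos[0] - robot_pos[0]
        dy = target_pos[1] - robot_pos[1]
        nrm = math.hypot(dx, dy)
        return (dx / nrm, dy / nrm)          # no obstacle: move to target
    closest = min(near, key=lambda o: shortest_distance(robot_pos, o))  # S20
    return move_direction(robot_pos, target_pos, closest)               # S30
```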
  • step S20 includes:
  • Step S201: Pre-process the predetermined 3D model of the robot and the 3D model of the obstacle.
  • Step S202 Calculate the shortest distance between the robot and the obstacle by using the predetermined distance calculation rule for the acquired positioning data, the pre-processed robot 3D model data, and the pre-processed obstacle 3D model data.
  • the robot 3D model pre-processing in step S201 comprises: for each joint of the robot, directly using a predetermined algorithm (for example, the QuickHull fast convex hull algorithm) to find the smallest convex polyhedron surrounding the joint, so as to convert the non-convex robot model into a convex model.
  • a robot 3D model convexified in this way effectively improves both the calculation speed and the calculation accuracy when the shortest distance vector is subsequently calculated.
  • there are three ways of pre-processing the obstacle 3D model:
  • the first, constructing a convex bounding box of the non-convex polyhedron to convert it into a convex body for collision detection;
  • the second, performing convex decomposition on the non-convex polyhedron so that the non-convex model is converted into multiple convex bodies for collision detection;
  • the third, dividing the obstacle 3D model into equal fan-shaped parts (fan-shaped subdivision) and then performing convex decomposition on each individual sector.
  • compared with the first two methods, this approach of fan-shaped division followed by convex decomposition is not only faster but also more accurate.
  • the step of fanning the obstacle 3D model includes:
  • building a spherical bounding box of the obstacle 3D model to be fan-divided and finding its center; setting an initial fan-shaped bisecting plane through the center and rotating it about the center multiple times by a preset rotation angle so as to divide the spherical bounding box into n fan-shaped parts; the n fan-shaped parts of the spherical bounding box serve as the n model parts of the obstacle 3D model.
  • step of performing convex decomposition on the evenly divided single sectors includes:
  • the Delaunay triangulation algorithm is used to triangulate the surface of the obstacle 3D model, producing a set of triangular patches; a corresponding convex block is then constructed for each triangular patch. For example, a triangular patch of zero thickness is stretched by a preset thickness in the direction of its plane normal vector to become a convex block.
  • the predetermined distance calculation rule includes:
  • according to the robot's current positioning data and a predetermined screening algorithm, the model parts obtained by fan-dividing the obstacle 3D model are screened to select the model parts for which the distance is to be calculated;
  • using the obtained positioning data and the selected model parts, the shortest distance between the robot and the selected model parts is calculated with a predetermined distance calculation algorithm (for example, the GJK algorithm); this shortest distance is the shortest distance between the robot and the obstacle 3D model.
  • step S30 includes:
  • the obstacle avoidance control system of the robot analyzes whether obstacle avoidance is needed according to the calculated shortest distance: if the calculated shortest distance is greater than a preset distance threshold, it is determined that obstacle avoidance is not required; if the calculated shortest distance is less than or equal to the preset distance threshold, it is determined that obstacle avoidance is required. If obstacle avoidance is required, the obstacle avoidance control system calculates the direction in which the robot should currently move according to the obtained positioning data, the calculated shortest distance, and the 3D model of the obstacle, and controls the motion posture of the robot according to the calculated movement direction.
  • the step of calculating the direction in which the current robot should move according to the acquired positioning data, the calculated shortest distance, and the 3D model of the obstacle includes:
  • a first preset type obstacle avoidance parameter (for example, a virtual repulsive force) is determined;
  • a second preset type obstacle avoidance parameter (for example, one based on the distance between the target point position and the current positioning position of the robot) is determined;
  • the predetermined projection analysis rule is:
  • in a plane coordinate system, the position point P1 represents the current position of the robot;
  • the position point P2 represents the position of the target point, that is, the target position;
  • the projection region P3 represents the projection of the obstacle 3D model onto the coordinate plane; P1 and P2 are connected in the plane to obtain a straight line J;
  • for a projection region P3 (e.g., S1 or S2), any point PS in the region is taken, a perpendicular to the line J is dropped from PS, and the intersection of the perpendicular with the line J is PJ, yielding the vector from PJ to PS; the angle θ between the shortest-distance vector and this vector is then calculated;
  • if θ is an acute angle, the region containing the point PS is determined to be an effective occlusion region (for example, the effective occlusion projection region S2 in FIG. 4); if θ is not acute, the region containing the point PS is not an effective occlusion region.
  • the first preset type obstacle avoidance parameter is a virtual repulsion
  • the second preset type obstacle avoidance parameter is a virtual gravity
  • the first preset type obstacle avoidance parameter is determined according to the calculated shortest distance and the area of the effective occlusion region;
  • the second preset type obstacle avoidance parameter is determined according to the distance between the target position and the current positioning position of the robot;
  • the direction in which the robot should currently move is determined according to the first preset type obstacle avoidance parameter and the second preset type obstacle avoidance parameter.
  • the resultant direction of the virtual gravitational force and the virtual repulsive force is calculated, and the resultant force direction is the direction in which the robot should currently move.
  • the first calculation rule is as follows (the formulas appear as images in the original):
  • using the vector of the shortest distance between the robot and the obstacle, with s being the area of the effective occlusion region,
  • the virtual repulsive force exerted by the obstacle on the robot is obtained from a predetermined relation, in which:
  • kr represents a preset virtual repulsion coefficient;
  • s0 represents a preset threshold for the area of the effective occlusion region, s0 > 0;
  • d 0 represents a preset distance threshold, d 0 >0
  • the direction of the virtual repulsive force is the same as that of the shortest-distance vector;
  • when the robot is far from the obstacle, i.e. the shortest distance exceeds the set distance threshold d0, obstacle avoidance is not performed and the magnitude of the virtual repulsive force is 0; once the robot enters the obstacle avoidance range (the shortest distance is less than d0), a relatively large effective occlusion area s exceeding the set value s0 enlarges the virtual repulsive force, so that the robot begins avoiding while still relatively far away, and large obstacles can thus be avoided in advance;
  • the virtual gravitational force of the target position on the robot is given by a second relation (a formula image in the original), where kt represents the preset gravitational coefficient, dt represents the distance between the target position and the current positioning position of the robot, and the virtual gravitational force is directed toward the target position.
  • through the description of the above embodiments, those skilled in the art can clearly understand that the methods of the foregoing embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and can also be implemented by hardware, but in many cases the former is the better implementation.
  • based on such an understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc), including a number of instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, or a network device, etc.) to perform the methods described in the various embodiments of the present invention.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Automation & Control Theory (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Mechanical Engineering (AREA)
  • Robotics (AREA)
  • Electromagnetism (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Acoustics & Sound (AREA)
  • Manipulator (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

An obstacle avoidance control system (10) for a robot, a method, a robot (1), and a storage medium. The method comprises: obtaining current positioning data of the robot (1), and determining, according to the current positioning data and the predetermined position data of each obstacle in the moving area, whether the path from the current positioning position to a target position contains an obstacle whose distance from the current positioning position is less than a preset distance (S10); if there is such an obstacle, calculating the shortest distance between the robot (1) and the obstacle according to the obtained positioning data, a predetermined 3D model of the robot (1), and a predetermined 3D model of the obstacle (S20); and calculating, according to the obtained positioning data, the calculated shortest distance, and the 3D model of the obstacle, the direction in which the robot (1) should currently move, and controlling the motion posture of the robot (1) according to the calculated movement direction so as to avoid the obstacle (S30). The method can effectively control obstacle avoidance of the robot (1).

Description

Obstacle avoidance control system and method for a robot, robot, and storage medium
Claim of priority
Under the Paris Convention, this application claims priority to the Chinese patent application filed on March 27, 2017 with application number CN201710186581.6 and entitled "Obstacle avoidance control system and method for a robot", the entire content of which is incorporated herein by reference.
Technical field
The present invention relates to the field of computer technology, and in particular to an obstacle avoidance control system and method for a robot, a robot, and a storage medium.
Background
At present, autonomous mobile robots can be widely used in many scenarios, for example serving as guides in exhibition halls, leading visitors from one exhibition area to the next; serving in restaurants, actively welcoming guests and leading them to vacant seats to order; or guiding and patrolling in public places, moving along a programmed route and stopping to answer questions when someone needs help. In these scenarios, how to prevent the robot from colliding with obstacles in the environment while it moves is an important technical problem. Autonomous mobile robots currently rely on their own sensors for positioning and obstacle avoidance, and the usual industry solution is to mount proximity sensors (for example, ultrasonic, infrared, or laser sensors) on the robot; if the robot detects that it is within a certain distance of an obstacle (for example, 10 cm), it performs obstacle avoidance.
The existing obstacle avoidance solutions have the following drawbacks. First, obstacles can only be detected in the horizontal plane at the height of the sensor; for a four-legged table, if the sensor is mounted at a height of 30 cm while the tabletop is at 60 cm, the sensor cannot detect the obstacle and the robot will eventually hit the tabletop. Second, obstacles can only be detected in the directions in which sensors are mounted; if the robot has no sensor at its rear, moving backward will cause it to hit an obstacle.
Therefore, how to effectively control robot obstacle avoidance when the sensors cannot provide full coverage has become a technical problem to be solved urgently.
Summary of the invention
The main object of the present invention is to provide an obstacle avoidance control system and method for a robot, a robot, and a storage medium, aiming to effectively control robot obstacle avoidance.
To achieve the above object, a first aspect of the present application provides an obstacle avoidance control system for a robot, the obstacle avoidance control system comprising:
a determining module, configured to obtain current positioning data of the robot in real time or periodically, and to determine, according to the current positioning data and the predetermined position data of each obstacle in the moving area, whether the path from the current positioning position to the target position contains an obstacle whose distance from the current positioning position is less than a preset distance;
a calculating module, configured to, if there is an obstacle whose distance from the current positioning position is less than the preset distance, calculate the shortest distance between the robot and the obstacle according to the obtained positioning data, a predetermined 3D model of the robot, and a predetermined 3D model of the obstacle;
a control module, configured to calculate, according to the obtained positioning data, the calculated shortest distance, and the 3D model of the obstacle, the direction in which the robot should currently move, and to control the motion posture of the robot according to the calculated movement direction so as to avoid the obstacle.
A second aspect of the present application further provides a robot obstacle avoidance method, the method comprising the following steps:
A1. obtaining current positioning data of the robot in real time or periodically, and determining, according to the current positioning data and the predetermined position data of each obstacle in the moving area, whether the path from the current positioning position to the target position contains an obstacle whose distance from the current positioning position is less than a preset distance;
A2. if there is an obstacle whose distance from the current positioning position is less than the preset distance, calculating the shortest distance between the robot and the obstacle according to the obtained positioning data, a predetermined 3D model of the robot, and a predetermined 3D model of the obstacle;
A3. calculating, according to the obtained positioning data, the calculated shortest distance, and the 3D model of the obstacle, the direction in which the robot should currently move, and controlling the motion posture of the robot according to the calculated movement direction so as to avoid the obstacle.
A third aspect of the present application provides a robot comprising a processor and a memory, the memory storing an obstacle avoidance control system of the robot, which is executable by the processor to implement steps A1 to A3 above.
A fourth aspect of the present application provides a computer readable storage medium storing an obstacle avoidance control system of a robot, which is executable by at least one processor to implement steps A1 to A3 above.
In the obstacle avoidance control system and method for a robot, the robot, and the storage medium proposed by the present invention, when an obstacle whose distance from the current positioning position is less than the preset distance is detected from the robot's current positioning data, the shortest distance between the robot and the obstacle in three-dimensional space is calculated according to the robot's current positioning data, the predetermined 3D model of the robot, and the predetermined 3D model of the obstacle, and the direction in which the robot should currently move is calculated so as to control the motion posture of the robot. Since the movement direction of the robot can be controlled through the shortest distance between the robot and the obstacle in three-dimensional space, obstacles in all directions around the robot in three-dimensional space can be detected and avoided, effectively controlling robot obstacle avoidance.
Brief description of the drawings
FIG. 1 is a schematic flowchart of an embodiment of the robot obstacle avoidance method of the present invention;
FIG. 2a is a schematic diagram of the fan-shaped division of an obstacle 3D model in an embodiment of the robot obstacle avoidance method of the present invention;
FIG. 2b is a schematic diagram of the sector model part labeled k in an embodiment of the robot obstacle avoidance method of the present invention;
FIG. 3a is a schematic diagram of the 3D models of the robot and an obstacle in an embodiment of the robot obstacle avoidance method of the present invention;
FIG. 3b is a schematic diagram of the fan-shaped division of a cubic obstacle model in an embodiment of the robot obstacle avoidance method of the present invention;
FIG. 3c is a schematic diagram of the screening of model parts in an embodiment of the robot obstacle avoidance method of the present invention;
FIG. 3d is a schematic diagram of calculating the shortest distance vector in an embodiment of the robot obstacle avoidance method of the present invention;
FIG. 4 is a schematic diagram of determining an effective occlusion region in an embodiment of the robot obstacle avoidance method of the present invention;
FIG. 5 is a schematic diagram of the operating environment of a preferred embodiment of the obstacle avoidance control system 10 of the present invention;
FIG. 6 is a functional block diagram of a preferred embodiment of the obstacle avoidance control system 10 of the present invention.
The realization of the objects, functional features, and advantages of the present invention will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Detailed description of the embodiments
In order to make the technical problems to be solved, the technical solutions, and the beneficial effects of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain the present invention and are not intended to limit it.
The present invention provides a robot obstacle avoidance method.
Referring to FIG. 1, FIG. 1 is a schematic flowchart of an embodiment of the robot obstacle avoidance method of the present invention.
In one embodiment, the robot obstacle avoidance method comprises:
Step S10: The obstacle avoidance control system of the robot obtains the robot's current positioning data (for example, its indoor position, posture, etc.) in real time or periodically (for example, every 2 seconds), and determines, according to the current positioning data and the predetermined position data of each obstacle in the moving area, whether the path from the current positioning position to the target position contains an obstacle whose distance from the current positioning position is less than a preset distance. For example, the robot's own sensors may be used for positioning and for judging the distance to each obstacle in the predetermined moving area; proximity sensors (for example, ultrasonic, infrared, or laser sensors) may be mounted on the robot to judge whether the path from the current positioning position to the target position contains an obstacle whose distance from the current positioning position is less than the preset distance.
Step S20: If there is an obstacle whose distance from the current positioning position is less than the preset distance, the shortest distance between the robot and the obstacle is calculated according to the obtained positioning data, the predetermined 3D model of the robot, and the predetermined 3D model of the obstacle.
After the distances between the robot's current positioning position and the obstacles in the predetermined moving area are detected, if it is judged that no obstacle is closer to the current positioning position than the preset distance, the robot continues to move along the path to the target position while detecting, in real time or periodically, its distance to each obstacle in the moving area. If it is judged that there is an obstacle whose distance from the current positioning position is less than the preset distance, the shortest distance between the robot and the obstacle is calculated from the obtained positioning data, the predetermined 3D model of the robot, and the predetermined 3D model of the obstacle, and this shortest distance is used to judge whether the robot will collide with the obstacle when moving along the path to the target position in three-dimensional space. In this way, obstacles can be detected not only in the plane at the height of the robot's sensors but anywhere in three-dimensional space, so that potential obstacles in all directions can be detected both in the directions where the robot has sensors and in the directions where it has none. The predetermined 3D model of the robot and the 3D models of the obstacles in the moving area may be pre-stored in a storage unit of the robot, or may be obtained by the robot by accessing an Internet of Things system server through a wireless communication unit; no limitation is made here.
Step S30: The direction in which the robot should currently move is calculated according to the obtained positioning data, the calculated shortest distance, and the 3D model of the obstacle, and the motion posture of the robot is controlled according to the calculated movement direction, so as to avoid potential obstacles in all directions in three-dimensional space and effectively control obstacle avoidance while the robot moves along the path to the target position.
In this embodiment, when an obstacle whose distance from the current positioning position is less than the preset distance is detected from the robot's current positioning data, the shortest distance between the robot and the obstacle in three-dimensional space is calculated according to the robot's current positioning data, the predetermined 3D model of the robot, and the predetermined 3D model of the obstacle, and the direction in which the robot should currently move is calculated so as to control the robot's motion posture. Since the movement direction of the robot can be controlled through the shortest distance between the robot and the obstacle in three-dimensional space, obstacles in all directions in three-dimensional space can be detected and avoided, effectively controlling robot obstacle avoidance.
Further, the step S20 comprises:
Step S201: Pre-process the predetermined 3D model of the robot and the 3D model of the obstacle.
Step S202: Using the obtained positioning data, the pre-processed robot 3D model data, and the pre-processed obstacle 3D model data, calculate the shortest distance between the robot and the obstacle with a predetermined distance calculation rule.
For example, since robots and obstacles are generally non-convex bodies, the 3D models of the robot and the obstacle may be pre-processed, for example converted into convex bodies, so that the shortest distance can subsequently be calculated more accurately and quickly.
Further, the robot 3D model pre-processing in step S201 comprises: for each joint of the robot, directly using a predetermined algorithm (for example, the QuickHull fast convex hull algorithm) to find the smallest convex polyhedron surrounding the joint, so as to convert the non-convex robot model into a convex model. A robot 3D model convexified in this way effectively improves both the calculation speed and the calculation accuracy when the shortest distance vector is subsequently calculated.
There are three ways of pre-processing the obstacle 3D model: first, constructing a convex bounding box of the non-convex polyhedron to convert it into a convex body for collision detection; second, performing convex decomposition on the non-convex polyhedron so that the non-convex model is converted into multiple convex bodies for collision detection; third, dividing the obstacle 3D model into equal fan-shaped parts (fan-shaped subdivision) and then performing convex decomposition on each individual sector. Compared with the first two, this approach of fan-shaped division followed by convex decomposition is not only faster but also more accurate.
进一步地,所述对障碍物3D模型扇形均分的步骤包括:
建立待扇形均分的障碍物3D模型的球形包围盒,找到球形包围盒的球心;
设定一经过所述球心的初始扇形均分平面,将所述初始扇形均分平面按预设的旋转角绕所述球心进行多次旋转,以将球形包围盒均分为n个扇形部分,该球形包围盒的n个扇形部分作为障碍物3D模型的n个模型部分。
例如,在一种具体实施方式中,可包括如下步骤:
X1、建立要扇形均分的障碍物3D模型M的球形包围盒B,找到球形包围盒B的球心O,然后在球心O处建立三维坐标系Oxyz;
X2、过球心O做一条与三维坐标系Oxyz中z轴重合的直线L,则xoz平面即为初始扇形均分平面,设xoz平面为α1,α1将障碍物3D模型分为2部分;
X3、将α1绕直线L选择一定角度β(β代表相邻扇形偏角)得到另外一个新平面α2,将新平面继续旋转β可以得到平面α3,旋转m-1次可以得到第m个平面αn
X4、设β=180/m,则m个平面可以把球形包围盒B均分为2m部分,障碍物3D模型即被分为2m个模型部分。通过上述步骤可以完成对任意模型(包括非凸模型)的简单剖分,并通过哈希表管理剖分好的模型部分。
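步骤X1至X4把包围球按偏角均分为若干扇形后,任一空间点相对球心的方位角即可直接定位其所属扇形编号,示意如下(编号从1开始、自X轴正向逆时针,与正文约定一致;函数名sector_label为本文假设):

```python
import math

def sector_label(point, center, n):
    """按绕 z 轴的方位角把点归入 n 等分扇形之一,编号 1..n(从 X 轴正向逆时针)。"""
    ang = math.degrees(math.atan2(point[1] - center[1],
                                  point[0] - center[0])) % 360.0
    return int(ang // (360.0 / n)) + 1
```

例如 n=32 时每个扇形跨 11.25 度,位于第四象限靠近 X 轴的点会落入编号 32 的扇形。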
进一步地,所述对均分后的单个扇形进行凸分解的步骤包括:
采用Delaunay三角剖分算法对障碍物3D模型进行表面三角剖分,产生三角面片(凸片)集合;并针对每一个三角面片构造与之对应的凸块。例如,将厚度为零的三角面片在其平面法向量方向进行预设厚度的拉伸,变为凸块。
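上述"将零厚度三角面片沿其平面法向量拉伸预设厚度、变为凸块"的操作可示意如下(纯 Python 叉积求法向量;函数名extrude_triangle与参数thickness均为本文假设):

```python
def extrude_triangle(tri, thickness):
    """把三角面片 tri=[p0,p1,p2] 沿其单位法向量拉伸 thickness,返回三棱柱的 6 个顶点。"""
    (x0, y0, z0), (x1, y1, z1), (x2, y2, z2) = tri
    u = (x1 - x0, y1 - y0, z1 - z0)
    v = (x2 - x0, y2 - y0, z2 - z0)
    # 叉积得到法向量
    n = (u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0])
    length = (n[0] ** 2 + n[1] ** 2 + n[2] ** 2) ** 0.5
    n = (n[0] / length, n[1] / length, n[2] / length)   # 单位化
    offset = tuple(c * thickness for c in n)
    return list(tri) + [(p[0] + offset[0], p[1] + offset[1], p[2] + offset[2])
                        for p in tri]
```

得到的三棱柱是凸体,可直接参与后续的凸体间最短距离计算。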
进一步地,所述预先确定的距离计算规则包括:
根据机器人当前的定位数据(如室内位置、姿态等)及预先确定的筛选算法,对障碍物3D模型扇形均分后获得的各个模型部分进行筛选,筛选出待进行距离计算的模型部分;
对获取的定位数据、筛选出的模型部分,利用预先确定的距离计算算法(例如,GJK算法)计算出机器人与筛选出的模型部分的最短距离,该最短距离即为机器人与障碍物3D模型的最短距离。
进一步地,如图2a、2b所示,图2a为本发明机器人避障方法一实施例中对障碍物3D模型扇形均分的示意图。图2b为本发明机器人避障方法一实施例中标号为k的扇形模型部分的示意图。所述预先确定的筛选算法包括:
Y1、将障碍物3D模型扇形均分后获得的n个模型部分分别作为障碍物的n个节点,建立key-value键值分别是相对于初始扇形均分平面(即xoz平面)的旋转角即偏角和模型几何信息数据的哈希表,以进行模型节点管理;
Y2、从1开始对扇形均分获得的各个模型部分进行标号;均分的n个扇形模型部分,相邻扇形偏角为360°/n,根据标号,建立标号为i的扇形模型部分的偏角映射关系,代表所述偏角映射关系的哈希函数为:
Hash(i)=i*(360°/n)
其中,i为标号为i的扇形模型部分,Hash(i)代表标号为i的扇形模型部分与障碍物坐标系的X轴正轴的偏角;
Y3、建立机器人的运动学,根据建立的运动学计算出机器人各个关节的位姿,从建立的哈希表中查询出机器人附近的障碍物扇形区域(如图2a、2b所示)。机器人在运动过程中,通过机器人运动学求解各关节位姿,运动学方程为:
Ti=A0A1A2…Ai-1Ai
其中,Ak(k=1,2,...,i)为机器人关节坐标系之间的齐次变换矩阵(可以通过机器人各关节的D-H参数确定),A0表示机器人当前位置矩阵(与机器人当前定位数据对应),Ti为第i个关节相对于机器人坐标系的位姿;
通过Ti计算出机器人运动过程中各个关节局部坐标系原点坐标的实时更新值Qi(x,y,z),进一步可以得到关节在障碍物坐标系下的偏角α:
α=f(Qi(x,y,z))
其中,Qi(x,y,z)表示机器人关节在机器人坐标系下的坐标;Tr表示机器人坐标系变换到障碍物坐标系的变换矩阵(为4*4的矩阵,机器人坐标系和障碍物坐标系已确定,该矩阵可以直接计算出来),则机器人关节在障碍物坐标系下的坐标Qi(xt,yt,zt)为:
Qi(xt,yt,zt)=TrQi(x,y,z)
假定障碍物坐标系Z轴正向朝上,遵循右手坐标系,设关节在障碍物坐标系下的偏角为α,则tan α=yt/xt,反求解三角函数(即α=arctan(yt/xt),并由xt、yt的符号确定所在象限)即可得到关节在障碍物坐标系下的偏角α。获取到偏角α之后,即可根据代表所述偏角映射关系的哈希函数Hash(i)计算得到对应标号的扇形模型部分,并基于对应标号的模型部分筛选出待进行距离计算的模型部分。例如,计算得到的扇形模型部分的标号为k,则可选取标号在[k-M,k+N]范围内的扇形模型部分进行最短距离计算。其中M、N为预设数值,以选取标号为k的扇形模型部分附近的多个扇形模型部分作为待进行最短距离计算的模型部分。
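步骤Y3的流程(连乘齐次变换矩阵求关节位姿、变换到障碍物坐标系、由偏角α反查扇形标号、再选取[k-M,k+N]范围并做环绕转换)可串成如下纯 Python 草图(mat_mul、joint_pose、candidate_sectors 等名称均为本文假设,仅为示意):

```python
import math

def mat_mul(a, b):
    """4x4 齐次矩阵乘法(纯 Python,示意)。"""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def joint_pose(mats):
    """Ti = A0·A1·…·Ai:依次连乘各关节的齐次变换矩阵。"""
    t = mats[0]
    for m in mats[1:]:
        t = mat_mul(t, m)
    return t

def candidate_sectors(xt, yt, n, m_back, m_fwd):
    """由障碍物坐标系下的关节坐标求偏角 α,
    按 Hash(i)=i*(360°/n) 反查标号 k,再选取 [k-M, k+N](带环绕)的扇形标号。"""
    alpha = math.degrees(math.atan2(yt, xt)) % 360.0
    k = math.ceil(alpha / (360.0 / n)) or n
    return [(j - 1) % n + 1 for j in range(k - m_back, k + m_fwd + 1)]
```

以正文算例为参照:机器人在障碍物坐标系下位于 (1800, -100),n=32、M=1、N=2 时,筛选出的扇形标号为 31、32、1、2(33、34 环绕转换为 1、2)。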
如图3a所示,图3a为本发明机器人避障方法一实施例中机器人与障碍物的3D模型示意图。在一种具体实施方式中,机器人采用只有底盘的运动、没有手臂等其他运动关节的机器人,机器人3D模型采用高为1500mm,运动底盘半径为320mm的机器人3D模型,障碍物3D模型采用一个简单立方体模型,尺寸为2200mm*2200mm*1000mm,在障碍物模型坐标系下机器人当前的坐标为(1800,-100)。
图3b为本发明机器人避障方法一实施例中对立方体障碍物模型进行扇形均分的示意图。对障碍物模型进行预处理中,预处理主要是对障碍物模型进行扇形均分,如图3b所示,障碍物模型被扇形均分为32份,从X轴逆时针对扇形均分的模型部分进行编号,1,2,…,15,16,…,31,32;每一个模型块的夹角为:360/32=11.25度,可以看出,编号1模型块与X轴正向偏角11.25度,编号2模型块与X轴正向偏角11.25*2=22.5度,编号为i的模型块与X轴正向偏角:i*(360/32)。
图3c为本发明机器人避障方法一实施例中对模型部分的筛选示意图。在对模型部分的筛选过程中,因本实施例采用的机器人只有底盘的运动,没有手臂等其他运动关节,所以底盘位姿代表机器人的整体位姿。当前机器人的位置为(1800,-100)(相对于障碍物坐标系下的坐标),可以计算出机器人与障碍物坐标系的X轴正轴的偏角为354度;进而计算机器人对应扇形模型部分的标号为354/11.25≈31.47,向上取整得到32,所以待进行距离计算的对应扇形块编号为32,也就是说机器人离编号为32的障碍物块最近。接下来选取k=32附近的障碍物块,采用GJK计算与机器人之间的最短距离及最短距离点;选取M=1,N=2,则得到障碍物块范围是[31,34],编号超过32的需要做简单转换:33转换为对应编号为1的障碍物块,34转换为对应编号为2的障碍物块;如图3c所示,最终选取编号是31,32,1,2的障碍物块进行最短距离计算。
图3d为本发明机器人避障方法一实施例中计算最短距离向量的示意图。在计算最短距离时,通过上述处理,已缩小障碍物块的范围(1,2,31,32),直接采用GJK算法计算出机器人与障碍物间的最短距离点,如图3d所示,分别为障碍物上的点(x1,y1,z1)=(1100,-100,-235),机器人上的点(x2,y2,z2)=(1477,-100,-235);则机器人与障碍物之间的最短距离向量d=(x2-x1,y2-y1,z2-z1)=(377,0,0)。
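正文采用 GJK 算法求凸体间的最短距离。为保持示例自洽,下面用"对两组采样顶点穷举最近点对"来示意最短距离向量的计算(复杂度为 O(n·m) 的替代草图,并非 GJK 本身;工程上应替换为 GJK 等凸体距离算法):

```python
import math

def shortest_distance_vector(pts_obstacle, pts_robot):
    """穷举两组采样点间的最近点对,返回 (最短距离向量, 距离)。
    向量由障碍物上的点指向机器人上的点,与正文 d=(x2-x1, y2-y1, z2-z1) 的约定一致。"""
    best = None
    for a in pts_obstacle:       # 障碍物采样点 (x1, y1, z1)
        for b in pts_robot:      # 机器人采样点 (x2, y2, z2)
            v = (b[0] - a[0], b[1] - a[1], b[2] - a[2])
            dist = math.sqrt(v[0] ** 2 + v[1] ** 2 + v[2] ** 2)
            if best is None or dist < best[1]:
                best = (v, dist)
    return best
```

代入正文算例的两个最短距离点即可复现最短距离向量 (377, 0, 0)。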
进一步地,所述步骤S30包括:
机器人的避障控制系统根据计算的最短距离分析是否需要避障;如若计算的最短距离大于预设距离阈值,则确定不需要避障,或者,若计算的最短距离小于或者等于预设距离阈值,则确定需要避障。若确定需要避障,则机器人的避障控制系统根据获取的定位数据、计算的最短距离及该障碍物的3D模型,计算出当前机器人应运动的方向,并根据计算出的运动方向控制机器人的运动姿态。
进一步地,所述根据获取的定位数据、计算的最短距离及该障碍物的3D模型,计算出当前机器人应运动的方向的步骤包括:
将机器人及该障碍物投影到同一坐标系平面中;
根据预先确定的投影分析规则及障碍物3D模型投影到所述坐标系平面的投影区域外轮廓各个点的坐标,计算出该障碍物的投影相对于机器人当前位置及目标位置有效遮挡区域的面积;
根据计算的最短距离及有效遮挡区域的面积确定出第一预设类型避障参数(例如,虚拟斥力),根据目标点位置与机器人当前定位位置的距离确定出第二预设类型避障参数(例如,虚拟引力),根据第一预设类型避障参数及第二预设类型避障参数确定出机器人当前应运动的方向。
进一步地,如图4所示,图4为本发明机器人避障方法一实施例中确定有效遮挡区域的示意图。所述预先确定的投影分析规则为:
设坐标系平面的P1位置点表示机器人所在位置即当前定位位置,P2位置点表示目标点所在位置即目标位置,投影区域P3表示障碍物3D模型在坐标系平面中的投影,并在坐标系平面中连接P1P2,得到一条直线J;
若直线J与投影区域P3没有交点或者交点只有一个,则确定不存在有效遮挡区域;
若直线J与投影区域P3的交点个数大于1,则直线J将投影分割为两部分(如图4所示的S1区域和S2区域),在投影区域P3(例如S1或S2区域中)中任意找一点PS,过PS作直线J的垂线,垂线与直线J的交点为PJ,进而得到向量PJPS;计算最短距离的向量d与向量PJPS的夹角θ,若θ是锐角,则确定PS点所在区域为有效遮挡区域(例如,图4中有效遮挡投影区域S2),或者,若θ不是锐角,则确定PS点所在区域不是有效遮挡区域。
进一步地,所述第一预设类型避障参数为虚拟斥力,所述第二预设类型避障参数为虚拟引力,所述根据计算的最短距离及有效遮挡区域的面积确定出第一预设类型避障参数,根据目标位置与机器人当前定位位置的距离确定出第二预设类型避障参数,根据所述第一预设类型避障参数及所述第二预设类型避障参数确定出机器人当前应运动的方向的步骤包括:
对计算的最短距离和有效遮挡投影区域的面积,利用第一计算规则计算出作用在机器人上的一个虚拟斥力;
对当前定位位置与目标点位置的距离,利用第二计算规则计算出作用在机器人上的一个虚拟引力;
计算出该虚拟引力和虚拟斥力的合力方向,所述合力方向即为机器人当前应运动的方向。
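虚拟引力按第二计算规则 Ft=kt·dt 计算并指向目标位置,与虚拟斥力求和后取单位向量即得合力方向,可示意如下(斥力向量在此作为入参给出,因其具体关系式原文以图片形式给出,此处不做复原;函数名均为本文假设):

```python
import math

def attraction(robot, goal, kt):
    """第二计算规则 Ft = kt·dt,返回指向目标位置的引力向量(二维示意)。"""
    dx, dy = goal[0] - robot[0], goal[1] - robot[1]
    dt = math.hypot(dx, dy)
    if dt == 0:
        return (0.0, 0.0)          # 已到达目标,引力为零
    f = kt * dt
    return (f * dx / dt, f * dy / dt)

def resultant_direction(f_attract, f_repel):
    """合力方向 = 引力向量与斥力向量之和的单位向量。"""
    fx = f_attract[0] + f_repel[0]
    fy = f_attract[1] + f_repel[1]
    norm = math.hypot(fx, fy)
    if norm == 0:
        return (0.0, 0.0)          # 合力为零:方向未定义,示意返回零向量
    return (fx / norm, fy / norm)
```

得到的单位向量即机器人当前应运动的方向,可直接用于控制其运动姿态。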
进一步地,所述第一计算规则为:
Fr=f(d,S)
其中,d为机器人与障碍物的最短距离的向量,S为有效遮挡区域的面积,Fr为障碍物对机器人的虚拟斥力,f表示将最短距离向量和有效遮挡区域的面积转换为障碍物对机器人虚拟斥力的关系式。该关系式的实现可以有多种方法,在一种可选的实施例中,该关系式为一个分段函数(原文中该公式以图片形式给出),其中,kr、br表示预设的虚拟斥力系数,s0表示预设的有效遮挡区域面积阈值,s0>0;d0表示预设的距离阈值,d0>0;虚拟斥力方向(即Fr方向)与最短距离方向相同。
依据上述关系式,当机器人与障碍物距离较远,超过设定的距离阈值d0时不进行避障,Fr的大小为0;进入避障距离范围内(最短距离小于d0)后,当有效遮挡区域的面积S比较大,超过设定的值s0时,面积项会使Fr变大,距离较远时就可以进行避障,提前避障,以绕开较大的障碍物。
进一步地,所述第二计算规则为:
Ft=kt*dt
其中,Ft为目标位置对机器人的虚拟引力,kt表示预设的引力系数,dt表示目标位置与机器人当前定位位置的距离,虚拟引力方向(即Ft方向)朝向目标位置。
本发明进一步提供一种机器人的避障控制系统。
请参阅图5,是本发明避障控制系统10较佳实施例的运行环境示意图。
在本实施例中,所述的避障控制系统10安装并运行于机器人1中。
该机器人1可包括,但不仅限于,存储器11、处理器12及显示器13。图5仅示出了具有组件11-13的机器人1,但是应理解的是,并不要求实施所有示出的组件,可以替代地实施更多或者更少的组件。
其中,存储器11包括内存及至少一种类型的可读存储介质。内存为机器人1的运行提供缓存;可读存储介质可为如闪存、硬盘、多媒体卡、卡型存储器等的非易失性存储介质。在一些实施例中,所述可读存储介质可以是所述机器人1的内部存储单元,例如该机器人1的硬盘或内存。在另一些实施例中,所述可读存储介质也可以是所述机器人1的外部存储设备,例如所述机器人1上配备的插接式硬盘,智能存储卡(Smart Media Card,SMC),安全数字(Secure Digital,SD)卡,闪存卡(Flash Card)等。本实施例中,所述存储器11的可读存储介质通常用于存储安装于所述机器人1的应用软件及各类数据,例如所述避障控制系统10的程序代码等。所述存储器11还可以用于暂时地存储已经输出或者将要输出的数据。
所述处理器12在一些实施例中可以是一中央处理器(Central Processing Unit,CPU),微处理器或其他数据处理芯片,用于运行所述存储器11中存储的程序代码或处理数据。该处理器12执行所述避障控制系统10,可实现上述机器人避障方法的任一步骤。
所述显示器13在一些实施例中可以是LED显示器、液晶显示器、触控式液晶显示器以及OLED(Organic Light-Emitting Diode,有机发光二极管)触摸器等。所述显示器13用于显示在所述机器人1中处理的信息以及用于显示可视化的用户界面,例如应用菜单界面、应用图标界面等。所述机器人1的部件11-13通过系统总线相互通信。
请参阅图6,是本发明避障控制系统10较佳实施例的功能模块图。在本实施例中,所述的避障控制系统10可以被分割成确定模块01、计算模块02、控制模块03。本发明所称的模块是指能够完成特定功能的一系列计算机程序指令段,用于描述所述避障控制系统10在所述机器人1中的执行过程。当处理器12执行避障控制系统10各模块的计算机程序指令段时,基于各个计算机程序指令段所能实现的操作和功能,可实现上述机器人避障方法的任一步骤。以下描述将具体介绍所述确定模块01、计算模块02、控制模块03所实现的操作和功能。
所述确定模块01,用于实时或者定时(例如,每隔2秒)获取机器人当前的定位数据(例如在室内的位置、姿态等),并根据当前定位数据及预先确定的移动区域内各个障碍物的位置数据,确定当前定位位置至目标位置路径中是否有离当前定位位置的距离小于预设距离的障碍物。例如,可依靠机器人自身的传感器来定位并判断与预先确定的移动区域内各个障碍物的距离,如可在机器人上安装接近传感器(例如,超声波、红外、激光等传感器)来判断机器人当前定位位置至目标位置路径中是否有离当前定位位置的距离小于预设距离的障碍物。
所述计算模块02,用于若有离当前定位位置的距离小于预设距离的障碍物,则根据获取的定位数据、预先确定的机器人的3D模型及预先确定的该障碍物的3D模型,计算出机器人与该障碍物的最短距离。
在检测到机器人的当前定位位置与预先确定的移动区域内各个障碍物的距离之后,若判断没有障碍物离当前定位位置的距离小于预设距离,则继续沿目标位置路径移动并实时或者定时检测机器人与移动区域内各个障碍物的距离。若判断有离当前定位位置的距离小于预设距离的障碍物,则根据获取的定位数据、预先确定的机器人的3D模型及预先确定的该障碍物的3D模型计算出机器人与该障碍物的最短距离,以利用该最短距离来判断在三维空间中机器人沿目标位置路径移动时是否会碰撞到该障碍物,从而实现不仅能在机器人的传感器所在高度平面检测到障碍物,还能检测到三维空间中潜在的障碍物,以在机器人安装有传感器的方向和机器人没有安装传感器的其他方向上均能检测到三维空间中各个方向潜在的障碍物。其中,所述预先确定的机器人的3D模型及移动区域内各个障碍物的3D模型可以预先存储于机器人的存储单元中,或者,可以由机器人通过无线通信单元访问物联网系统服务器获取,在此不做限定。
所述控制模块03,用于根据获取的定位数据、计算的最短距离及该障碍物的3D模型,计算出当前机器人应运动的方向,根据计算出的运动方向控制机器人的运动姿态,以避开三维空间中各个方向潜在的障碍物,有效控制机器人在沿目标位置路径移动时的避障。
本实施例通过机器人当前的定位数据检测到有离当前定位位置的距离小于预设距离的障碍物时,根据机器人当前的定位数据、预先确定的机器人的3D模型及预先确定的该障碍物的3D模型,计算出机器人与该障碍物在三维空间的最短距离,并计算出当前机器人应运动的方向,以控制机器人的运动姿态。由于能通过机器人与障碍物在三维空间的最短距离来控制机器人的运动方向,从而实现检测并避开机器人在三维空间中各个方向的障碍物,有效地控制机器人避障。
进一步地,上述计算模块02还用于:
对预先确定的机器人的3D模型及该障碍物的3D模型进行预处理;对获取的定位数据、预处理后的机器人3D模型数据及预处理后的障碍物3D模型数据,利用预先确定的距离计算规则计算出机器人与障碍物的最短距离。
例如,由于机器人和障碍物一般为非凸体,因此,可对机器人和障碍物的3D模型进行预处理如转换为凸体,以便后续更加准确和快速地计算出最短距离。
进一步地,上述计算模块02还用于:针对机器人的每一个关节,直接利用预先确定的算法(例如,QuickHull快速凸包算法)找出包围各个关节的最小凸多面体,以将机器人非凸模型转换为凸体模型。通过上述凸处理的机器人3D模型在后续计算最短距离向量时能有效提高计算速度和计算精度。
障碍物3D模型预处理的方式包括三种:第一种、构建非凸多面体的凸包围盒使之转换为凸体进行碰撞检测;第二种、对非凸多面体进行凸分解,使非凸模型转换为多个凸体进行碰撞检测;第三种、对障碍物3D模型扇形均分(即扇形剖分),然后对均分后的单个扇形进行凸分解,这种先扇形均分再凸分解的方式相对于前两种不仅计算速度更快,而且计算精度更高。
进一步地,上述计算模块02还用于:
建立待扇形均分的障碍物3D模型的球形包围盒,找到球形包围盒的球心;设定一经过所述球心的初始扇形均分平面,将所述初始扇形均分平面按预设的旋转角绕所述球心进行多次旋转,以将球形包围盒均分为n个扇形部分;该球形包围盒的n个扇形部分作为障碍物3D模型的n个模型部分。
例如,在一种具体实施方式中,可包括如下步骤:
X1、建立要扇形均分的障碍物3D模型M的球形包围盒B,找到球形包围盒B的球心O,然后在球心O处建立三维坐标系Oxyz;
X2、过球心O做一条与三维坐标系Oxyz中z轴重合的直线L,则xoz平面即为初始扇形均分平面,设xoz平面为α1,α1将障碍物3D模型分为2部分;
X3、将α1绕直线L旋转一定角度β(β代表相邻扇形偏角)得到另外一个新平面α2,将新平面继续旋转β可以得到平面α3,如此旋转m-1次可以得到第m个平面αm;
X4、设β=180/m,则m个平面可以把球形包围盒B均分为2m部分,障碍物3D模型即被分为2m个模型部分。通过上述步骤可以完成对任意模型(包括非凸模型)的简单剖分,并通过哈希表管理剖分好的模型部分。
进一步地,上述计算模块02还用于:
采用Delaunay三角剖分算法对障碍物3D模型进行表面三角剖分,产生三角面片(凸片)集合;并针对每一个三角面片构造与之对应的凸块。例如,将厚度为零的三角面片在其平面法向量方向进行预设厚度的拉伸,变为凸块。
进一步地,所述预先确定的距离计算规则包括:
根据机器人当前的定位数据(如室内位置、姿态等)及预先确定的筛选算法,对障碍物3D模型扇形均分后获得的各个模型部分进行筛选,筛选出待进行距离计算的模型部分;
对获取的定位数据、筛选出的模型部分,利用预先确定的距离计算算法(例如,GJK算法)计算出机器人与筛选出的模型部分的最短距离,该最短距离即为机器人与障碍物3D模型的最短距离。
进一步地,如图2a、2b所示,所述预先确定的筛选算法包括:
Y1、将障碍物3D模型扇形均分后获得的n个模型部分分别作为障碍物的n个节点,建立key-value键值分别是相对于初始扇形均分平面(即xoz平面)的旋转角即偏角和模型几何信息数据的哈希表,以进行模型节点管理;
Y2、从1开始对扇形均分获得的各个模型部分进行标号;均分的n个扇形模型部分,相邻扇形偏角为360°/n,根据标号,建立标号为i的扇形模型部分的偏角映射关系,代表所述偏角映射关系的哈希函数为:
Hash(i)=i*(360°/n)
其中,i为标号为i的扇形模型部分,Hash(i)代表标号为i的扇形模型部分与障碍物坐标系的X轴正轴的偏角;
Y3、建立机器人的运动学,根据建立的运动学计算出机器人各个关节的位姿,从建立的哈希表中查询出机器人附近的障碍物扇形区域(如图2a、2b所示)。机器人在运动过程中,通过机器人运动学求解各关节位姿,运动学方程为:
Ti=A0A1A2…Ai-1Ai
其中,Ak(k=1,2,...,i)为机器人关节坐标系之间的齐次变换矩阵(可以通过机器人各关节的D-H参数确定),A0表示机器人当前位置矩阵(与机器人当前定位数据对应),Ti为第i个关节相对于机器人坐标系的位姿;
通过Ti计算出机器人运动过程中各个关节局部坐标系原点坐标的实时更新值Qi(x,y,z),进一步可以得到关节在障碍物坐标系下的偏角α:
α=f(Qi(x,y,z))
其中,Qi(x,y,z)表示机器人关节在机器人坐标系下的坐标;Tr表示机器人坐标系变换到障碍物坐标系的变换矩阵(为4*4的矩阵,机器人坐标系和障碍物坐标系已确定,该矩阵可以直接计算出来),则机器人关节在障碍物坐标系下的坐标Qi(xt,yt,zt)为:
Qi(xt,yt,zt)=TrQi(x,y,z)
假定障碍物坐标系Z轴正向朝上,遵循右手坐标系,设关节在障碍物坐标系下的偏角为α,则tan α=yt/xt,反求解三角函数(即α=arctan(yt/xt),并由xt、yt的符号确定所在象限)即可得到关节在障碍物坐标系下的偏角α。获取到偏角α之后,即可根据代表所述偏角映射关系的哈希函数Hash(i)计算得到对应标号的扇形模型部分,并基于对应标号的模型部分筛选出待进行距离计算的模型部分。例如,计算得到的扇形模型部分的标号为k,则可选取标号在[k-M,k+N]范围内的扇形模型部分进行最短距离计算。其中M、N为预设数值,以选取标号为k的扇形模型部分附近的多个扇形模型部分作为待进行最短距离计算的模型部分。
如图3a所示,在一种具体实施方式中,机器人采用只有底盘的运动、没有手臂等其他运动关节的机器人,机器人3D模型采用高为1500mm,运动底盘半径为320mm的机器人3D模型,障碍物3D模型采用一个简单立方体模型,尺寸为2200mm*2200mm*1000mm,在障碍物模型坐标系下机器人当前的坐标为(1800,-100)。
对障碍物模型进行预处理中,预处理主要是对障碍物模型进行扇形均分,如图3b所示,障碍物模型被扇形均分为32份,从X轴逆时针对扇形均分的模型部分进行编号,1,2,…,15,16,…,31,32;每一个模型块的夹角为:360/32=11.25度,可以看出,编号1模型块与X轴正向偏角11.25度,编号2模型块与X轴正向偏角11.25*2=22.5度,编号为i的模型块与X轴正向偏角:i*(360/32)。
在对模型部分的筛选过程中,因本实施例采用的机器人只有底盘的运动,没有手臂等其他运动关节,所以底盘位姿代表机器人的整体位姿。当前机器人的位置为(1800,-100)(相对于障碍物坐标系下的坐标),可以计算出机器人与障碍物坐标系的X轴正轴的偏角为354度;进而计算机器人对应扇形模型部分的标号为354/11.25≈31.47,向上取整得到32,所以待进行距离计算的对应扇形块编号为32,也就是说机器人离编号为32的障碍物块最近。
接下来选取k=32附近的障碍物块,采用GJK计算与机器人之间的最短距离及最短距离点;选取M=1,N=2,则得到障碍物块范围是[31,34],编号超过32的需要做简单转换:33转换为对应编号为1的障碍物块,34转换为对应编号为2的障碍物块;如图3c所示,最终选取编号是31,32,1,2的障碍物块进行最短距离计算。
在计算最短距离时,通过上述处理,已缩小障碍物块的范围(1,2,31,32),直接采用GJK算法计算出机器人与障碍物间的最短距离点,如图3d所示,分别为障碍物上的点(x1,y1,z1)=(1100,-100,-235),机器人上的点(x2,y2,z2)=(1477,-100,-235);则机器人与障碍物之间的最短距离向量d=(x2-x1,y2-y1,z2-z1)=(377,0,0)。
进一步地,上述控制模块03还用于:
根据计算的最短距离分析是否需要避障;如若计算的最短距离大于预设距离阈值,则确定不需要避障,或者,若计算的最短距离小于或者等于预设距离阈值,则确定需要避障。若确定需要避障,则根据获取的定位数据、计算的最短距离及该障碍物的3D模型,计算出当前机器人应运动的方向,并根据计算出的运动方向控制机器人的运动姿态。
进一步地,上述控制模块03还用于:
将机器人及该障碍物投影到同一坐标系平面中;
根据预先确定的投影分析规则及障碍物3D模型投影到所述坐标系平面的投影区域外轮廓各个点的坐标,计算出该障碍物的投影相对于机器人当前位置及目标位置有效遮挡区域的面积;
根据计算的最短距离及有效遮挡区域的面积确定出第一预设类型避障参数(例如,虚拟斥力),根据目标点位置与机器人当前定位位置的距离确定出第二预设类型避障参数(例如,虚拟引力),根据第一预设类型避障参数及第二预设类型避障参数确定出机器人当前应运动的方向。
进一步地,如图4所示,所述预先确定的投影分析规则为:
设坐标系平面的P1位置点表示机器人所在位置即当前定位位置,P2位置点表示目标点所在位置即目标位置,投影区域P3表示障碍物3D模型在坐标系平面中的投影,并在坐标系平面中连接P1P2,得到一条直线J;
若直线J与投影区域P3没有交点或者交点只有一个,则确定不存在有效遮挡区域;
若直线J与投影区域P3的交点个数大于1,则直线J将投影分割为两部分(如图4所示的S1区域和S2区域),在投影区域P3(例如S1或S2区域中)中任意找一点PS,过PS作直线J的垂线,垂线与直线J的交点为PJ,进而得到向量PJPS;计算最短距离的向量d与向量PJPS的夹角θ,若θ是锐角,则确定PS点所在区域为有效遮挡区域(例如,图4中有效遮挡投影区域S2),或者,若θ不是锐角,则确定PS点所在区域不是有效遮挡区域。
进一步地,所述第一预设类型避障参数为虚拟斥力,所述第二预设类型避障参数为虚拟引力,上述控制模块03还用于:
对计算的最短距离和有效遮挡投影区域的面积,利用第一计算规则计算出作用在机器人上的一个虚拟斥力;
对当前定位位置与目标点位置的距离,利用第二计算规则计算出作用在机器人上的一个虚拟引力;
计算出该虚拟引力和虚拟斥力的合力方向,所述合力方向即为机器人当前应运动的方向。
进一步地,所述第一计算规则为:
Fr=f(d,S)
其中,d为机器人与障碍物的最短距离的向量,S为有效遮挡区域的面积,Fr为障碍物对机器人的虚拟斥力,f表示将最短距离向量和有效遮挡区域的面积转换为障碍物对机器人虚拟斥力的关系式。该关系式的实现可以有多种方法,在一种可选的实施例中,该关系式为一个分段函数(原文中该公式以图片形式给出),其中,kr、br表示预设的虚拟斥力系数,s0表示预设的有效遮挡区域面积阈值,s0>0;d0表示预设的距离阈值,d0>0;虚拟斥力方向(即Fr方向)与最短距离方向相同。
依据上述关系式,当机器人与障碍物距离较远,超过设定的距离阈值d0时不进行避障,Fr的大小为0;进入避障距离范围内(最短距离小于d0)后,当有效遮挡区域的面积S比较大,超过设定的值s0时,面积项会使Fr变大,距离较远时就可以进行避障,提前避障,以绕开较大的障碍物。
进一步地,所述第二计算规则为:
Ft=kt*dt
其中,Ft为目标位置对机器人的虚拟引力,kt表示预设的引力系数,dt表示目标位置与机器人当前定位位置的距离,虚拟引力方向(即Ft方向)朝向目标位置。
进一步地,本发明还提供了一种计算机可读存储介质。
在本实施例中,该计算机可读存储介质上存储有机器人的避障控制系统,该机器人的避障控制系统可被至少一处理器执行,以实现以下操作:
步骤S10、实时或者定时获取机器人当前的定位数据,并根据当前定位数据及预先确定的移动区域内各个障碍物的位置数据,确定当前定位位置至目标位置路径中是否有离当前定位位置的距离小于预设距离的障碍物;
步骤S20、若有离当前定位位置的距离小于预设距离的障碍物,则根据获取的定位数据、预先确定的机器人的3D模型及预先确定的该障碍物的3D模型,计算出机器人与该障碍物的最短距离;
步骤S30、根据获取的定位数据、计算的最短距离及该障碍物的3D模型,计算出当前机器人应运动的方向,根据计算出的运动方向控制机器人的运动姿态,以避开障碍物。
进一步地,所述步骤S20包括:
步骤S201、对预先确定的机器人的3D模型及该障碍物的3D模型进行预处理。
步骤S202、对获取的定位数据、预处理后的机器人3D模型数据及预处理后的障碍物3D模型数据,利用预先确定的距离计算规则计算出机器人与障碍物的最短距离。
进一步地,所述步骤S201中的机器人3D模型预处理包括:针对机器人的每一个关节,直接利用预先确定的算法(例如,QuickHull快速凸包算法)找出包围各个关节的最小凸多面体,以将机器人非凸模型转换为凸体模型。通过上述凸处理的机器人3D模型在后续计算最短距离向量时能有效提高计算速度和计算精度。
障碍物3D模型预处理的方式包括三种:第一种、构建非凸多面体的凸包围盒使之转换为凸体进行碰撞检测;第二种、对非凸多面体进行凸分解,使非凸模型转换为多个凸体进行碰撞检测;第三种、对障碍物3D模型扇形均分(即扇形剖分),然后对均分后的单个扇形进行凸分解,这种先扇形均分再凸分解的方式相对于前两种不仅计算速度更快,而且计算精度更高。
进一步地,所述对障碍物3D模型扇形均分的步骤包括:
建立待扇形均分的障碍物3D模型的球形包围盒,找到球形包围盒的球心;
设定一经过所述球心的初始扇形均分平面,将所述初始扇形均分平面按预设的旋转角绕所述球心进行多次旋转,以将球形包围盒均分为n个扇形部分,该球形包围盒的n个扇形部分作为障碍物3D模型的n个模型部分。
进一步地,所述对均分后的单个扇形进行凸分解的步骤包括:
采用Delaunay三角剖分算法对障碍物3D模型进行表面三角剖分,产生三角面片(凸片)集合;并针对每一个三角面片构造与之对应的凸块。例如,将厚度为零的三角面片在其平面法向量方向进行预设厚度的拉伸,变为凸块。
进一步地,所述预先确定的距离计算规则包括:
根据机器人当前的定位数据(如室内位置、姿态等)及预先确定的筛选算法,对障碍物3D模型扇形均分后获得的各个模型部分进行筛选,筛选出待进行距离计算的模型部分;
对获取的定位数据、筛选出的模型部分,利用预先确定的距离计算算法(例如,GJK算法)计算出机器人与筛选出的模型部分的最短距离,该最短距离即为机器人与障碍物3D模型的最短距离。
进一步地,所述步骤S30包括:
机器人的避障控制系统根据计算的最短距离分析是否需要避障;如若计算的最短距离大于预设距离阈值,则确定不需要避障,或者,若计算的最短距离小于或者等于预设距离阈值,则确定需要避障。若确定需要避障,则机器人的避障控制系统根据获取的定位数据、计算的最短距离及该障碍物的3D模型,计算出当前机器人应运动的方向,并根据计算出的运动方向控制机器人的运动姿态。
进一步地,所述根据获取的定位数据、计算的最短距离及该障碍物的3D模型,计算出当前机器人应运动的方向的步骤包括:
将机器人及该障碍物投影到同一坐标系平面中;
根据预先确定的投影分析规则及障碍物3D模型投影到所述坐标系平面的投影区域外轮廓各个点的坐标,计算出该障碍物的投影相对于机器人当前位置及目标位置有效遮挡区域的面积;
根据计算的最短距离及有效遮挡区域的面积确定出第一预设类型避障参数(例如,虚拟斥力),根据目标点位置与机器人当前定位位置的距离确定出第二预设类型避障参数(例如,虚拟引力),根据第一预设类型避障参数及第二预设类型避障参数确定出机器人当前应运动的方向。
进一步地,所述预先确定的投影分析规则为:
设坐标系平面的P1位置点表示机器人所在位置即当前定位位置,P2位置点表示目标点所在位置即目标位置,投影区域P3表示障碍物3D模型在坐标系平面中的投影,并在坐标系平面中连接P1P2,得到一条直线J;
若直线J与投影区域P3没有交点或者交点只有一个,则确定不存在有效遮挡区域;
若直线J与投影区域P3的交点个数大于1,则直线J将投影分割为两部分(如图4所示的S1区域和S2区域),在投影区域P3(例如S1或S2区域中)中任意找一点PS,过PS作直线J的垂线,垂线与直线J的交点为PJ,进而得到向量PJPS;计算最短距离的向量d与向量PJPS的夹角θ,若θ是锐角,则确定PS点所在区域为有效遮挡区域(例如,图4中有效遮挡投影区域S2),或者,若θ不是锐角,则确定PS点所在区域不是有效遮挡区域。
进一步地,所述第一预设类型避障参数为虚拟斥力,所述第二预设类型避障参数为虚拟引力,所述根据计算的最短距离及有效遮挡区域的面积确定出第一预设类型避障参数,根据目标位置与机器人当前定位位置的距离确定出第二预设类型避障参数,根据所述第一预设类型避障参数及所述第二预设类型避障参数确定出机器人当前应运动的方向的步骤包括:
对计算的最短距离和有效遮挡投影区域的面积,利用第一计算规则计算出作用在机器人上的一个虚拟斥力;
对当前定位位置与目标点位置的距离,利用第二计算规则计算出作用在机器人上的一个虚拟引力;
计算出该虚拟引力和虚拟斥力的合力方向,所述合力方向即为机器人当前应运动的方向。
进一步地,所述第一计算规则为:
Fr=f(d,S)
其中,d为机器人与障碍物的最短距离的向量,S为有效遮挡区域的面积,Fr为障碍物对机器人的虚拟斥力,f表示将最短距离向量和有效遮挡区域的面积转换为障碍物对机器人虚拟斥力的关系式。该关系式的实现可以有多种方法,在一种可选的实施例中,该关系式为一个分段函数(原文中该公式以图片形式给出),其中,kr、br表示预设的虚拟斥力系数,s0表示预设的有效遮挡区域面积阈值,s0>0;d0表示预设的距离阈值,d0>0;虚拟斥力方向(即Fr方向)与最短距离方向相同。
依据上述关系式,当机器人与障碍物距离较远,超过设定的距离阈值d0时不进行避障,Fr的大小为0;进入避障距离范围内(最短距离小于d0)后,当有效遮挡区域的面积S比较大,超过设定的值s0时,面积项会使Fr变大,距离较远时就可以进行避障,提前避障,以绕开较大的障碍物。
进一步地,所述第二计算规则为:
Ft=kt*dt
其中,Ft为目标位置对机器人的虚拟引力,kt表示预设的引力系数,dt表示目标位置与机器人当前定位位置的距离,虚拟引力方向(即Ft方向)朝向目标位置。
本发明之计算机可读存储介质的具体实施方式与上述机器人避障方法的实施例大致相同,故不再赘述。
需要说明的是,在本文中,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者装置不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者装置所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括该要素的过程、方法、物品或者装置中还存在另外的相同要素。
通过以上的实施方式的描述,本领域的技术人员可以清楚地了解到上述实施例方法可借助软件加必需的通用硬件平台的方式来实现,当然也可以通过硬件来实现,但很多情况下前者是更佳的实施方式。基于这样的理解,本发明的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质(如ROM/RAM、磁碟、光盘)中,包括若干指令用以使得一台终端设备(可以是手机,计算机,服务器,空调器,或者网络设备等)执行本发明各个实施例所述的方法。
以上参照附图说明了本发明的优选实施例,并非因此局限本发明的权利范围。上述本发明实施例序号仅仅为了描述,不代表实施例的优劣。另外,虽然在流程图中示出了逻辑顺序,但是在某些情况下,可以以不同于此处的顺序执行所示出或描述的步骤。
本领域技术人员不脱离本发明的范围和实质,可以有多种变型方案实现本发明,比如作为一个实施例的特征可用于另一实施例而得到又一实施例。凡在运用本发明的技术构思之内所作的任何修改、等同替换和改进,均应在本发明的权利范围之内。

Claims (20)

  1. 一种机器人的避障控制系统,其特征在于,所述避障控制系统包括:
    确定模块,用于实时或者定时获取机器人当前的定位数据,并根据当前定位数据及预先确定的移动区域内各个障碍物的位置数据,确定当前定位位置至目标位置路径中是否有离当前定位位置的距离小于预设距离的障碍物;
    计算模块,用于若有离当前定位位置的距离小于预设距离的障碍物,则根据获取的定位数据、预先确定的机器人的3D模型及预先确定的该障碍物的3D模型,计算出机器人与该障碍物的最短距离;
    控制模块,用于根据获取的定位数据、计算的最短距离及该障碍物的3D模型,计算出当前机器人应运动的方向,根据计算出的运动方向控制机器人的运动姿态,以避开障碍物。
  2. 如权利要求1所述的机器人的避障控制系统,其特征在于,所述计算模块还用于:
    对预先确定的机器人的3D模型及该障碍物的3D模型进行预处理,针对机器人的每一个关节,利用预先确定的算法找出包围各个关节的最小凸多面体,以将机器人的非凸模型转换为凸体模型;
    对该障碍物3D模型扇形均分,并对均分后的单个扇形进行凸分解;
    对获取的定位数据、预处理后的机器人3D模型数据及预处理后的该障碍物3D模型数据,利用预先确定的距离计算规则计算出机器人与该障碍物的最短距离。
  3. 如权利要求2所述的机器人的避障控制系统,其特征在于,所述计算模块还用于:
    建立待扇形均分的障碍物3D模型的球形包围盒,找到球形包围盒的球心;
    设定一经过所述球心的初始扇形均分平面,将所述初始扇形均分平面按预设的旋转角绕所述球心进行多次旋转,以将球形包围盒均分为n个扇形部分,该球形包围盒的n个扇形部分作为障碍物3D模型的n个模型部分。
  4. 如权利要求2所述的机器人的避障控制系统,其特征在于,所述预先确定的距离计算规则包括:
    根据机器人当前的定位数据及预先确定的筛选算法,对障碍物3D模型扇形均分后获得的各个模型部分进行筛选,筛选出待进行距离计算的模型部分;
    对获取的定位数据、筛选出的模型部分,利用预先确定的距离计算算法计算出机器人与筛选出的模型部分的最短距离,该最短距离为机器人与障碍物的最短距离;
    所述预先确定的筛选算法包括:
    将障碍物3D模型的n个模型部分分别作为障碍物的n个节点,建立键值为相对于初始扇形均分平面的偏角的哈希表,以进行模型节点管理;
    对各个模型部分进行标号,根据标号,建立标号为i的模型部分的偏角映射关系,定义所述偏角映射关系的哈希函数为:
    Hash(i)=i*(360°/n)
    其中,Hash(i)代表标号为i的扇形模型部分与障碍物坐标系的X轴正轴的偏角;
    建立机器人的运动学方程,根据建立的运动学方程计算出机器人各个关节的位姿,该运动学方程为:
    Ti=A0A1A2…Ai-1Ai
    其中,Ak(k=1,2,...,i)为机器人关节坐标系之间的齐次变换矩阵,A0为机器人当前位置矩阵,Ti为第i个关节相对于机器人坐标系的位姿;
    通过Ti计算出机器人运动过程中各个关节在机器人坐标系下的坐标Qi(x,y,z),并计算出机器人坐标系变换到障碍物坐标系的变换矩阵Tr,则机器人关节在障碍物坐标系下的坐标Qi(xt,yt,zt)为:
    Qi(xt,yt,zt)=TrQi(x,y,z)
    通过如下公式得到关节在障碍物坐标系下的偏角α:
    tan α=yt/xt(反求解三角函数并由xt、yt的符号确定象限)
    根据偏角α及哈希函数Hash(i)计算得到对应标号的模型部分,并基于对应标号的模型部分筛选出待进行距离计算的模型部分。
  5. 如权利要求1所述的机器人的避障控制系统,其特征在于,所述控制模块还用于:
    将机器人及该障碍物3D模型投影到同一坐标系平面中;
    根据预先确定的投影分析规则及障碍物3D模型投影到所述坐标系平面的投影区域外轮廓各个点的坐标,计算出该障碍物3D模型的投影相对于机器人的当前定位位置及目标位置有效遮挡区域的面积;
    根据计算的最短距离及有效遮挡区域的面积确定出第一预设类型避障参数,根据目标位置与机器人当前定位位置的距离确定出第二预设类型避障参数,根据所述第一预设类型避障参数及所述第二预设类型避障参数确定出机器人当前应运动的方向。
  6. 如权利要求5所述的机器人的避障控制系统,其特征在于,所述预先确定的投影分析规则为:
    设定坐标系平面的P1位置点为机器人的当前定位位置,P2位置点为目标位置,投影区域P3为障碍物3D模型在所述坐标系平面中的投影,并在所述坐标系平面中连接P1P2,得到直线J;
    若直线J与投影区域P3没有交点或者交点只有一个,则确定不存在有效遮挡区域;
    若直线J与投影区域P3的交点个数大于1,则直线J将投影分割为两部分;在投影区域P3中任意找一点PS,过PS作直线J的垂线,设定垂线与直线J的交点为PJ,得到向量PJPS;计算最短距离的向量d与向量PJPS的夹角θ;若判断夹角θ是锐角,则确定PS点所在区域是有效遮挡区域;若判断夹角θ不是锐角,则确定PS点所在区域不是有效遮挡区域。
  7. 如权利要求5所述的机器人的避障控制系统,其特征在于,所述第一预设类型避障参数为虚拟斥力,所述第二预设类型避障参数为虚拟引力,所述控制模块还用于:
    根据计算的最短距离和有效遮挡区域的面积,利用第一计算规则计算出作用在机器人上的虚拟斥力;
    根据当前定位位置与目标点位置的距离,利用第二计算规则计算出作用在机器人上的虚拟引力;
    计算出该虚拟引力和虚拟斥力的合力方向作为机器人当前应运动的方向。
  8. 如权利要求7所述的机器人的避障控制系统,其特征在于,所述第一计算规则为:
    设定机器人与障碍物的最短距离的向量为d,有效遮挡区域的面积为S,障碍物对机器人的虚拟斥力为Fr,则计算公式如下(该公式在原文中以图片形式给出):
    Fr=f(d,S)
    其中,d为机器人与障碍物的最短距离的向量,S为有效遮挡区域的面积,Fr为障碍物对机器人的虚拟斥力,kr、br为预设的虚拟斥力系数,s0为预设的有效遮挡区域面积阈值,d0为预设的距离阈值,Fr方向与最短距离方向相同。
  9. 如权利要求7所述的机器人的避障控制系统,其特征在于,所述第二计算规则为:
    Ft=kt*dt
    其中,Ft为目标位置对机器人的虚拟引力,kt为预设的引力系数,dt为目标位置与机器人当前定位位置的距离,Ft方向朝向目标位置。
  10. 一种机器人避障方法,其特征在于,所述方法包括以下步骤:
    A1、实时或者定时获取机器人当前的定位数据,并根据当前定位数据及预先确定的移动区域内各个障碍物的位置数据,确定当前定位位置至目标位置路径中是否有离当前定位位置的距离小于预设距离的障碍物;
    A2、若有离当前定位位置的距离小于预设距离的障碍物,则根据获取的定位数据、预先确定的机器人的3D模型及预先确定的该障碍物的3D模型,计算出机器人与该障碍物的最短距离;
    A3、根据获取的定位数据、计算的最短距离及该障碍物的3D模型,计算出当前机器人应运动的方向,根据计算出的运动方向控制机器人的运动姿态,以避开障碍物。
  11. 如权利要求10所述的机器人避障方法,其特征在于,所述A2步骤包括:
    对预先确定的机器人的3D模型及该障碍物的3D模型进行预处理,针对机器人的每一个关节,利用预先确定的算法找出包围各个关节的最小凸多面体,以将机器人的非凸模型转换为凸体模型;
    对该障碍物3D模型扇形均分,并对均分后的单个扇形进行凸分解;
    对获取的定位数据、预处理后的机器人3D模型数据及预处理后的该障碍物3D模型数据,利用预先确定的距离计算规则计算出机器人与该障碍物的最短距离。
  12. 如权利要求11所述的机器人避障方法,其特征在于,所述A2步骤还包括:
    建立待扇形均分的障碍物3D模型的球形包围盒,找到球形包围盒的球心;
    设定一经过所述球心的初始扇形均分平面,将所述初始扇形均分平面按预设的旋转角绕所述球心进行多次旋转,以将球形包围盒均分为n个扇形部分,该球形包围盒的n个扇形部分作为障碍物3D模型的n个模型部分。
  13. 如权利要求11所述的机器人避障方法,其特征在于,所述预先确定的距离计算规则包括:
    根据机器人当前的定位数据及预先确定的筛选算法,对障碍物3D模型扇形均分后获得的各个模型部分进行筛选,筛选出待进行距离计算的模型部分;
    对获取的定位数据、筛选出的模型部分,利用预先确定的距离计算算法计算出机器人与筛选出的模型部分的最短距离,该最短距离为机器人与障碍物的最短距离;
    所述预先确定的筛选算法包括:
    将障碍物3D模型的n个模型部分分别作为障碍物的n个节点,建立键值为相对于初始扇形均分平面的偏角的哈希表,以进行模型节点管理;
    对各个模型部分进行标号,根据标号,建立标号为i的模型部分的偏角映射关系,定义所述偏角映射关系的哈希函数为:
    Hash(i)=i*(360°/n)
    其中,Hash(i)代表标号为i的扇形模型部分与障碍物坐标系的X轴正轴的偏角;
    建立机器人的运动学方程,根据建立的运动学方程计算出机器人各个关节的位姿,该运动学方程为:
    Ti=A0A1A2…Ai-1Ai
    其中,Ak(k=1,2,...,i)为机器人关节坐标系之间的齐次变换矩阵,A0为机器人当前位置矩阵,Ti为第i个关节相对于机器人坐标系的位姿;
    通过Ti计算出机器人运动过程中各个关节在机器人坐标系下的坐标Qi(x,y,z),并计算出机器人坐标系变换到障碍物坐标系的变换矩阵Tr,则机器人关节在障碍物坐标系下的坐标Qi(xt,yt,zt)为:
    Qi(xt,yt,zt)=TrQi(x,y,z)
    通过如下公式得到关节在障碍物坐标系下的偏角α:
    tan α=yt/xt(反求解三角函数并由xt、yt的符号确定象限)
    根据偏角α及哈希函数Hash(i)计算得到对应标号的模型部分,并基于对应标号的模型部分筛选出待进行距离计算的模型部分。
  14. 如权利要求10所述的机器人避障方法,其特征在于,所述A3步骤包括:
    将机器人及该障碍物3D模型投影到同一坐标系平面中;
    根据预先确定的投影分析规则及障碍物3D模型投影到所述坐标系平面的投影区域外轮廓各个点的坐标,计算出该障碍物3D模型的投影相对于机器人的当前定位位置及目标位置有效遮挡区域的面积;
    根据计算的最短距离及有效遮挡区域的面积确定出第一预设类型避障参数,根据目标位置与机器人当前定位位置的距离确定出第二预设类型避障参数,根据所述第一预设类型避障参数及所述第二预设类型避障参数确定出机器人当前应运动的方向。
  15. 如权利要求14所述的机器人避障方法,其特征在于,所述预先确定的投影分析规则为:
    设定坐标系平面的P1位置点为机器人的当前定位位置,P2位置点为目标位置,投影区域P3为障碍物3D模型在所述坐标系平面中的投影,并在所述坐标系平面中连接P1P2,得到直线J;
    若直线J与投影区域P3没有交点或者交点只有一个,则确定不存在有效遮挡区域;
    若直线J与投影区域P3的交点个数大于1,则直线J将投影分割为两部分;在投影区域P3中任意找一点PS,过PS作直线J的垂线,设定垂线与直线J的交点为PJ,得到向量PJPS;计算最短距离的向量d与向量PJPS的夹角θ;若判断夹角θ是锐角,则确定PS点所在区域是有效遮挡区域;若判断夹角θ不是锐角,则确定PS点所在区域不是有效遮挡区域。
  16. 如权利要求14所述的机器人避障方法,其特征在于,所述第一预设类型避障参数为虚拟斥力,所述第二预设类型避障参数为虚拟引力,所述A3步骤还包括:
    根据计算的最短距离和有效遮挡区域的面积,利用第一计算规则计算出作用在机器人上的虚拟斥力;
    根据当前定位位置与目标点位置的距离,利用第二计算规则计算出作用在机器人上的虚拟引力;
    计算出该虚拟引力和虚拟斥力的合力方向作为机器人当前应运动的方向。
  17. 如权利要求16所述的机器人避障方法,其特征在于,所述第一计算规则为:
    设定机器人与障碍物的最短距离的向量为d,有效遮挡区域的面积为S,障碍物对机器人的虚拟斥力为Fr,则计算公式如下(该公式在原文中以图片形式给出):
    Fr=f(d,S)
    其中,d为机器人与障碍物的最短距离的向量,S为有效遮挡区域的面积,Fr为障碍物对机器人的虚拟斥力,kr、br为预设的虚拟斥力系数,s0为预设的有效遮挡区域面积阈值,d0为预设的距离阈值,虚拟斥力的方向与最短距离方向相同。
  18. 如权利要求16所述的机器人避障方法,其特征在于,所述第二计算规则为:
    Ft=kt*dt
    其中,Ft为目标位置对机器人的虚拟引力,kt为预设的引力系数,dt为目标位置与机器人当前定位位置的距离,Ft方向朝向目标位置。
  19. 一种机器人,其特征在于,该机器人包括处理器及存储器,该存储器上存储有机器人的避障控制系统,该机器人的避障控制系统可被该处理器执行,以实现如权利要求10至18中所述机器人避障方法中的任一步骤。
  20. 一种计算机可读存储介质,其特征在于,该计算机可读存储介质上存储有机器人的避障控制系统,该机器人的避障控制系统可被至少一处理器执行,以实现如权利要求10至18中所述机器人避障方法中的任一步骤。
PCT/CN2017/091368 2017-03-27 2017-06-30 机器人的避障控制系统、方法、机器人及存储介质 WO2018176668A1 (zh)

Priority Applications (6)

Application Number Priority Date Filing Date Title
JP2018533757A JP6716178B2 (ja) 2017-03-27 2017-06-30 ロボットの障害物回避制御システム、方法、ロボット及びプログラム
EP17897209.7A EP3410246B1 (en) 2017-03-27 2017-06-30 Robot obstacle avoidance control system and method, robot, and storage medium
US16/084,231 US11059174B2 (en) 2017-03-27 2017-06-30 System and method of controlling obstacle avoidance of robot, robot and storage medium
KR1020187018065A KR102170928B1 (ko) 2017-03-27 2017-06-30 로봇의 장애물 회피 제어 시스템, 방법, 로봇 및 저장매체
SG11201809892QA SG11201809892QA (en) 2017-03-27 2017-06-30 System and method of controlling obstacle avoidance of robot, robot and storage medium
AU2017404562A AU2017404562B2 (en) 2017-03-27 2017-06-30 System and method of controlling obstacle avoidance of robot, robot and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710186581.6 2017-03-27
CN201710186581.6A CN107688342B (zh) 2017-03-27 2017-03-27 机器人的避障控制系统及方法

Publications (1)

Publication Number Publication Date
WO2018176668A1 true WO2018176668A1 (zh) 2018-10-04

Family

ID=61152364

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/091368 WO2018176668A1 (zh) 2017-03-27 2017-06-30 机器人的避障控制系统、方法、机器人及存储介质

Country Status (9)

Country Link
US (1) US11059174B2 (zh)
EP (1) EP3410246B1 (zh)
JP (1) JP6716178B2 (zh)
KR (1) KR102170928B1 (zh)
CN (1) CN107688342B (zh)
AU (1) AU2017404562B2 (zh)
SG (1) SG11201809892QA (zh)
TW (1) TWI662388B (zh)
WO (1) WO2018176668A1 (zh)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111046743A (zh) * 2019-11-21 2020-04-21 新奇点企业管理集团有限公司 一种障碍物信息标注方法、装置、电子设备和存储介质
CN111207750A (zh) * 2019-12-31 2020-05-29 合肥赛为智能有限公司 一种机器人动态路径规划方法
CN112148013A (zh) * 2020-09-25 2020-12-29 深圳优地科技有限公司 机器人避障方法、机器人及存储介质
CN112704878A (zh) * 2020-12-31 2021-04-27 深圳市其乐游戏科技有限公司 集群游戏中的单位位置调整方法、系统、设备及存储介质
CN113110594A (zh) * 2021-05-08 2021-07-13 北京三快在线科技有限公司 控制无人机避障的方法、装置、存储介质及无人机
CN113282984A (zh) * 2021-05-21 2021-08-20 长安大学 一种公共场所人员应急疏散模拟方法
CN114859914A (zh) * 2022-05-09 2022-08-05 广东利元亨智能装备股份有限公司 障碍物检测方法、装置、设备及存储介质
CN115890676A (zh) * 2022-11-28 2023-04-04 深圳优地科技有限公司 机器人控制方法、机器人及存储介质

Families Citing this family (38)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108717302B (zh) * 2018-05-14 2021-06-25 平安科技(深圳)有限公司 机器人跟随人物方法、装置及存储介质、机器人
CN109461185B (zh) * 2018-09-10 2021-08-17 西北工业大学 一种适用于复杂场景的机器人目标主动避障方法
CN109465835A (zh) * 2018-09-25 2019-03-15 华中科技大学 一种动态环境下双臂服务机器人作业的事前安全预测方法
TWI721324B (zh) * 2018-10-10 2021-03-11 鴻海精密工業股份有限公司 電子裝置及立體物體的判斷方法
CN109284574B (zh) * 2018-10-25 2022-12-09 西安科技大学 一种串联桁架结构体系非概率可靠性分析方法
KR102190101B1 (ko) * 2019-03-08 2020-12-11 (주)아이로텍 로봇의 장애물 충돌 회피를 위한 경로 안내툴 표시 방법 및 이를 이용한 충돌 회피 시뮬레이션 시스템
CN110442126A (zh) * 2019-07-15 2019-11-12 北京三快在线科技有限公司 一种移动机器人及其避障方法
CN110390829A (zh) * 2019-07-30 2019-10-29 北京百度网讯科技有限公司 交通信号灯识别的方法及装置
CN112445209A (zh) * 2019-08-15 2021-03-05 纳恩博(北京)科技有限公司 机器人的控制方法和机器人、存储介质及电子装置
CN111168675B (zh) * 2020-01-08 2021-09-03 北京航空航天大学 一种家用服务机器人的机械臂动态避障运动规划方法
CN111449666B (zh) * 2020-03-09 2023-07-04 北京东软医疗设备有限公司 距离监测方法、装置、血管机、电子设备及存储介质
JP2021146905A (ja) * 2020-03-19 2021-09-27 本田技研工業株式会社 制御装置、制御方法およびプログラム
CN111571582B (zh) * 2020-04-02 2023-02-28 上海钧控机器人有限公司 一种艾灸机器人人机安全监控系统及监控方法
CN111427355B (zh) * 2020-04-13 2023-05-02 京东科技信息技术有限公司 障碍物数据处理方法、装置、设备及存储介质
JP7447670B2 (ja) * 2020-05-15 2024-03-12 トヨタ自動車株式会社 自律移動装置制御システム、その制御方法及びその制御プログラム
US11292132B2 (en) * 2020-05-26 2022-04-05 Edda Technology, Inc. Robot path planning method with static and dynamic collision avoidance in an uncertain environment
CN111857126A (zh) * 2020-05-29 2020-10-30 深圳市银星智能科技股份有限公司 一种机器人避障方法、机器人以及存储介质
CN111958590B (zh) * 2020-07-20 2021-09-28 佛山科学技术学院 一种复杂三维环境中机械臂防碰撞方法及系统
CN112415532B (zh) * 2020-11-30 2022-10-21 上海炬佑智能科技有限公司 灰尘检测方法、距离检测装置以及电子设备
CN112991527B (zh) * 2021-02-08 2022-04-19 追觅创新科技(苏州)有限公司 目标对象的躲避方法及装置、存储介质、电子装置
CN113119109A (zh) * 2021-03-16 2021-07-16 上海交通大学 基于伪距离函数的工业机器人路径规划方法和系统
CN113459090A (zh) * 2021-06-15 2021-10-01 中国农业大学 码垛机器人的智能避障方法、电子设备及介质
US11753045B2 (en) * 2021-06-22 2023-09-12 Waymo Llc Modeling positional uncertainty of moving objects using precomputed polygons
CN113601497B (zh) * 2021-07-09 2024-02-06 广东博智林机器人有限公司 一种方法、装置、机器人及存储介质
CN113752265B (zh) * 2021-10-13 2024-01-05 国网山西省电力公司超高压变电分公司 一种机械臂避障路径规划方法、系统及装置
KR102563074B1 (ko) * 2021-10-20 2023-08-02 금오공과대학교 산학협력단 차동 구동형 이동로봇의 운동역학을 고려한 장애물 회피 및 경로 추종방법
CN114035569B (zh) * 2021-11-09 2023-06-27 中国民航大学 一种航站楼载人机器人路径拓展通行方法
CN114161047B (zh) * 2021-12-23 2022-11-18 南京衍构科技有限公司 一种用于增材制造的焊枪头自动避障方法
CN114355914B (zh) * 2021-12-27 2022-07-01 盐城工学院 用于无人船的自主巡航系统及控制方法
CN114227694B (zh) * 2022-01-10 2024-05-03 珠海一微半导体股份有限公司 一种基于地插检测的机器人控制方法、芯片及机器人
CN114859904B (zh) * 2022-04-24 2023-04-07 汕头大学 一种基于e-grn的集群围捕方法、执行装置和系统
CN115202350B (zh) * 2022-07-15 2023-06-09 盐城工学院 一种agv小车的自动运输系统
CN115437388B (zh) * 2022-11-09 2023-01-24 成都朴为科技有限公司 一种全向移动机器人脱困方法和装置
CN115507857B (zh) * 2022-11-23 2023-03-14 常州唯实智能物联创新中心有限公司 高效机器人运动路径规划方法及系统
CN116755562B (zh) * 2023-07-04 2024-04-05 深圳市仙瞬科技有限公司 一种避障方法、装置、介质及ar/vr设备
CN117093005B (zh) * 2023-10-16 2024-01-30 华东交通大学 一种智能汽车自主避障方法
CN117207202B (zh) * 2023-11-09 2024-04-02 国网山东省电力公司东营供电公司 带电作业机器人防碰撞约束控制方法、系统、终端及介质
CN117406758B (zh) * 2023-12-14 2024-03-12 双擎科技(杭州)有限公司 一种机器人避障装置及机器人智能防碰系统

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101512453A (zh) * 2006-09-14 2009-08-19 Abb研究有限公司 避免工业机器人与物体之间碰撞的方法和设备
US7734387B1 (en) * 2006-03-31 2010-06-08 Rockwell Collins, Inc. Motion planner for unmanned ground vehicles traversing at high speeds in partially known environments
US20160111006A1 (en) * 2014-05-20 2016-04-21 Verizon Patent And Licensing Inc. User interfaces for selecting unmanned aerial vehicles and mission plans for unmanned aerial vehicles
CN106227218A (zh) * 2016-09-27 2016-12-14 深圳乐行天下科技有限公司 一种智能移动设备的导航避障方法及装置
CN106406312A (zh) * 2016-10-14 2017-02-15 平安科技(深圳)有限公司 导览机器人及其移动区域标定方法

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7164117B2 (en) * 1992-05-05 2007-01-16 Automotive Technologies International, Inc. Vehicular restraint system control system and method using multiple optical imagers
JP3425760B2 (ja) * 1999-01-07 2003-07-14 富士通株式会社 干渉チェック装置
US8840838B2 (en) * 2011-09-25 2014-09-23 Theranos, Inc. Centrifuge configurations
JP2014018912A (ja) 2012-07-18 2014-02-03 Seiko Epson Corp ロボット制御装置、ロボット制御方法およびロボット制御プログラムならびにロボットシステム
JP2014056506A (ja) * 2012-09-13 2014-03-27 Toyota Central R&D Labs Inc 障害物検出装置及びそれを備えた移動体
US9227323B1 (en) 2013-03-15 2016-01-05 Google Inc. Methods and systems for recognizing machine-readable information on three-dimensional objects
US20150202770A1 (en) * 2014-01-17 2015-07-23 Anthony Patron Sidewalk messaging of an autonomous robot
TWI555524B (zh) 2014-04-30 2016-11-01 國立交通大學 機器人的行動輔助系統
KR101664575B1 (ko) * 2014-11-07 2016-10-10 재단법인대구경북과학기술원 모바일 로봇의 장애물 회피 시스템 및 방법
US10586464B2 (en) * 2015-07-29 2020-03-10 Warren F. LeBlanc Unmanned aerial vehicles

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7734387B1 (en) * 2006-03-31 2010-06-08 Rockwell Collins, Inc. Motion planner for unmanned ground vehicles traversing at high speeds in partially known environments
CN101512453A (zh) * 2006-09-14 2009-08-19 Abb研究有限公司 避免工业机器人与物体之间碰撞的方法和设备
US20160111006A1 (en) * 2014-05-20 2016-04-21 Verizon Patent And Licensing Inc. User interfaces for selecting unmanned aerial vehicles and mission plans for unmanned aerial vehicles
CN106227218A (zh) * 2016-09-27 2016-12-14 深圳乐行天下科技有限公司 一种智能移动设备的导航避障方法及装置
CN106406312A (zh) * 2016-10-14 2017-02-15 平安科技(深圳)有限公司 导览机器人及其移动区域标定方法

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111046743A (zh) * 2019-11-21 2020-04-21 新奇点企业管理集团有限公司 一种障碍物信息标注方法、装置、电子设备和存储介质
CN111046743B (zh) * 2019-11-21 2023-05-05 新奇点智能科技集团有限公司 一种障碍物信息标注方法、装置、电子设备和存储介质
CN111207750A (zh) * 2019-12-31 2020-05-29 合肥赛为智能有限公司 一种机器人动态路径规划方法
CN112148013A (zh) * 2020-09-25 2020-12-29 深圳优地科技有限公司 机器人避障方法、机器人及存储介质
CN112704878A (zh) * 2020-12-31 2021-04-27 深圳市其乐游戏科技有限公司 集群游戏中的单位位置调整方法、系统、设备及存储介质
CN113110594A (zh) * 2021-05-08 2021-07-13 北京三快在线科技有限公司 控制无人机避障的方法、装置、存储介质及无人机
CN113282984A (zh) * 2021-05-21 2021-08-20 长安大学 一种公共场所人员应急疏散模拟方法
CN114859914A (zh) * 2022-05-09 2022-08-05 广东利元亨智能装备股份有限公司 障碍物检测方法、装置、设备及存储介质
CN115890676A (zh) * 2022-11-28 2023-04-04 深圳优地科技有限公司 机器人控制方法、机器人及存储介质

Also Published As

Publication number Publication date
SG11201809892QA (en) 2018-12-28
JP6716178B2 (ja) 2020-07-01
CN107688342B (zh) 2019-05-10
EP3410246B1 (en) 2021-06-23
AU2017404562B2 (en) 2020-01-30
US20210078173A1 (en) 2021-03-18
US11059174B2 (en) 2021-07-13
KR20190022435A (ko) 2019-03-06
EP3410246A4 (en) 2019-11-06
TWI662388B (zh) 2019-06-11
CN107688342A (zh) 2018-02-13
TW201835703A (zh) 2018-10-01
JP2019516146A (ja) 2019-06-13
AU2017404562A1 (en) 2018-10-11
KR102170928B1 (ko) 2020-10-29
EP3410246A1 (en) 2018-12-05

Similar Documents

Publication Publication Date Title
WO2018176668A1 (zh) 机器人的避障控制系统、方法、机器人及存储介质
AU2018271237B2 (en) Method for building a map of probability of one of absence and presence of obstacles for an autonomous robot
JP6105092B2 (ja) 光学式文字認識を用いて拡張現実を提供する方法と装置
US20210049360A1 (en) CONTROLLER GESTURES IN VIRTUAL, AUGMENTED, AND MIXED REALITY (xR) APPLICATIONS
Wang et al. Robot manipulator self-identification for surrounding obstacle detection
CN110319834B (zh) Indoor robot positioning method and robot
CN110874100A (zh) System and method for autonomous navigation using a visual sparse map
US11703334B2 (en) Mobile robots to generate reference maps for localization
US10102629B1 (en) Defining and/or applying a planar model for object detection and/or pose estimation
Krajník et al. External localization system for mobile robotics
KR20220076524A (ko) 증강 현실 효과들의 동적 점진적 열화
US9245366B1 (en) Label placement for complex geographic polygons
US11704881B2 (en) Computer systems and methods for navigating building information models in an augmented environment
JP2021531524A (ja) 3次元仮想空間モデルを利用したユーザポーズ推定方法および装置
JP7197550B2 (ja) ビジュアルローカリゼーションとオドメトリに基づく経路追跡方法およびシステム
WO2019183928A1 (zh) Indoor robot positioning method and robot
US20220237875A1 (en) Methods and apparatus for adaptive augmented reality anchor generation
US11620846B2 (en) Data processing method for multi-sensor fusion, positioning apparatus and virtual reality device
EP3295291B1 (en) Drawing object inferring system and method
US20220129006A1 (en) Spatial blind spot monitoring systems and related methods of use
Meng et al. Robust 3D Indoor Map Building via RGB-D SLAM with Adaptive IMU Fusion on Robot
Jabalameli et al. Near Real-Time Robotic Grasping of Novel Objects in Cluttered Scenes
CN116848493A (zh) Bundle adjustment using epipolar constraints
WO2023009138A1 (en) Near-field communication overlay
Faigl et al. External localization system for mobile robotics

Legal Events

Date Code Title Description
ENP Entry into the national phase (Ref document number: 20187018065; Country of ref document: KR; Kind code of ref document: A)

ENP Entry into the national phase (Ref document number: 2018533757; Country of ref document: JP; Kind code of ref document: A)

WWE WIPO information: entry into national phase (Ref document number: 2017897209; Country of ref document: EP)

ENP Entry into the national phase (Ref document number: 2017404562; Country of ref document: AU; Date of ref document: 20170630; Kind code of ref document: A)

ENP Entry into the national phase (Ref document number: 2017897209; Country of ref document: EP; Effective date: 20180828)

121 Ep: The EPO has been informed by WIPO that EP was designated in this application (Ref document number: 17897209; Country of ref document: EP; Kind code of ref document: A1)

NENP Non-entry into the national phase (Ref country code: DE)