WO2018176668A1 - Obstacle avoidance control system and method for a robot, robot, and storage medium - Google Patents
Obstacle avoidance control system and method for a robot, robot, and storage medium
- Publication number: WO2018176668A1 (application PCT/CN2017/091368)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- robot
- obstacle
- model
- distance
- obstacle avoidance
Classifications
- G05D1/0214 — Control of position or course in two dimensions specially adapted to land vehicles, with means for defining a desired trajectory in accordance with safety or protection criteria, e.g. avoiding hazardous areas
- G05D1/021 — Control of position or course in two dimensions specially adapted to land vehicles
- B25J9/1666 — Programme controls characterised by programming, planning systems for manipulators: motion, path, trajectory planning; avoiding collision or forbidden zones
- B25J9/163 — Programme controls characterised by the control loop: learning, adaptive, model based, rule based expert control
- B25J9/1676 — Programme controls characterised by safety, monitoring, diagnostic: avoiding collision or forbidden zones
- G05D1/0231 — Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
- G05D1/0238 — Control of position or course in two dimensions specially adapted to land vehicles using obstacle or wall sensors
- G05D1/0242 — Control of position or course in two dimensions specially adapted to land vehicles using non-visible light signals, e.g. IR or UV signals
- G05D1/0255 — Control of position or course in two dimensions specially adapted to land vehicles using acoustic signals, e.g. ultrasonic signals
- G05D1/0891 — Control of attitude, i.e. control of roll, pitch, or yaw, specially adapted for land vehicles
- G06T17/00 — Three-dimensional [3D] modelling, e.g. data description of 3D objects
- G06T2210/12 — Bounding box (indexing scheme for image generation or computer graphics)
Definitions
- the present invention relates to the field of computer technologies, and in particular, to an obstacle avoidance control system, method, robot, and storage medium for a robot.
- autonomous mobile robots can be used in many scenarios, such as guiding in an exhibition hall, leading visitors from one exhibition area to another; serving in a restaurant, actively welcoming guests and leading them to vacant seats; and guiding and patrolling in public places, moving along a programmed route and stopping to answer questions when someone needs help. In these scenarios, preventing the robot from colliding with obstacles in the environment is an important technical problem.
- autonomous mobile robots rely on their own sensors to locate and avoid obstacles.
- the industry's usual obstacle avoidance scheme is to install proximity sensors (such as ultrasonic, infrared, or laser sensors) on the robot; when the robot detects that its distance from an obstacle is less than a certain value (such as 10 cm), obstacle avoidance is performed.
- the existing obstacle avoidance scheme has the following disadvantages. First, obstacles can only be detected in the plane at the height of the sensor. In the case of a four-legged table, if the sensor height is 30 cm and the tabletop height is 60 cm, the sensor cannot detect the tabletop as an obstacle, which eventually causes the robot to hit the table. Second, obstacles can only be detected in the directions in which sensors are installed; if there is no sensor on the back of the robot, moving backward can cause a collision.
- the main object of the present invention is to provide a robot obstacle avoidance control system, method, robot and storage medium, which are intended to effectively control robot obstacle avoidance.
- a first aspect of the present application provides an obstacle avoidance control system for a robot, where the obstacle avoidance control system includes:
- a determining module configured to acquire the robot's current positioning data in real time or at intervals, and to determine, according to the current positioning data and the predetermined position data of each obstacle in the moving area, whether there is an obstacle on the path from the current position to the target position whose distance from the current position is less than a preset distance;
- a calculation module configured to calculate, if there is an obstacle whose distance from the current position is less than the preset distance, the shortest distance between the robot and the obstacle according to the acquired positioning data, a predetermined 3D model of the robot, and a predetermined 3D model of the obstacle;
- a control module configured to calculate the direction in which the robot should currently move according to the acquired positioning data, the calculated shortest distance, and the 3D model of the obstacle, and to control the motion posture of the robot according to the calculated direction so as to avoid the obstacle.
- the second aspect of the present application further provides a robot obstacle avoidance method, the method comprising the following steps:
- A1. Acquire the robot's current positioning data in real time or at intervals, and determine, according to the current positioning data and the predetermined position data of each obstacle in the moving area, whether there is an obstacle on the path from the current position to the target position whose distance from the current position is less than a preset distance;
- A2. If there is such an obstacle, calculate the shortest distance between the robot and the obstacle according to the acquired positioning data, the predetermined 3D model of the robot, and the predetermined 3D model of the obstacle;
- A3. Calculate the direction in which the robot should currently move according to the acquired positioning data, the calculated shortest distance, and the 3D model of the obstacle, and control the motion posture of the robot according to the calculated direction to avoid the obstacle.
- a third aspect of the present application provides a robot including a processor and a memory, wherein the memory stores an obstacle avoidance control system of the robot, and the obstacle avoidance control system of the robot can be executed by the processor to implement the following steps:
- A1. Acquire the robot's current positioning data in real time or at intervals, and determine, according to the current positioning data and the predetermined position data of each obstacle in the moving area, whether there is an obstacle on the path from the current position to the target position whose distance from the current position is less than a preset distance;
- A2. If there is such an obstacle, calculate the shortest distance between the robot and the obstacle according to the acquired positioning data, the predetermined 3D model of the robot, and the predetermined 3D model of the obstacle;
- A3. Calculate the direction in which the robot should currently move according to the acquired positioning data, the calculated shortest distance, and the 3D model of the obstacle, and control the motion posture of the robot according to the calculated direction to avoid the obstacle.
- a fourth aspect of the present application provides a computer readable storage medium having an obstacle avoidance control system of a robot stored thereon, the obstacle avoidance control system of the robot being executable by at least one processor to implement the following steps:
- A1. Acquire the robot's current positioning data in real time or at intervals, and determine, according to the current positioning data and the predetermined position data of each obstacle in the moving area, whether there is an obstacle on the path from the current position to the target position whose distance from the current position is less than a preset distance;
- A2. If there is such an obstacle, calculate the shortest distance between the robot and the obstacle according to the acquired positioning data, the predetermined 3D model of the robot, and the predetermined 3D model of the obstacle;
- A3. Calculate the direction in which the robot should currently move according to the acquired positioning data, the calculated shortest distance, and the 3D model of the obstacle, and control the motion posture of the robot according to the calculated direction to avoid the obstacle.
- in the obstacle avoidance control system, method, robot, and storage medium proposed by the present invention, when an obstacle whose distance from the current position is less than the preset distance is detected from the robot's current positioning data, the shortest distance between the robot and the obstacle in three-dimensional space is calculated according to the current positioning data, the predetermined 3D model of the robot, and the predetermined 3D model of the obstacle, and the direction in which the robot should currently move is calculated to control its motion posture. Since the robot's movement direction is controlled by the shortest distance between the robot and the obstacle in three-dimensional space, obstacles in all directions of the three-dimensional space can be detected and avoided, and the robot's obstacle avoidance can be effectively controlled.
- FIG. 1 is a schematic flow chart of an embodiment of a robot obstacle avoidance method according to the present invention;
- FIG. 2a is a schematic diagram of the fan-shaped equal division of an obstacle 3D model in an embodiment of the robot obstacle avoidance method according to the present invention;
- FIG. 2b is a schematic diagram of the sector model portion labeled k in an embodiment of the robot obstacle avoidance method according to the present invention;
- FIG. 3a is a schematic diagram of the 3D models of a robot and an obstacle in an embodiment of the robot obstacle avoidance method according to the present invention;
- FIG. 3b is a schematic diagram of the fan-shaped equal division of a cubic obstacle model in an embodiment of the robot obstacle avoidance method according to the present invention;
- FIG. 3c is a schematic diagram of screening model parts in an embodiment of the robot obstacle avoidance method according to the present invention;
- FIG. 3d is a schematic diagram of calculating the shortest distance vector in an embodiment of the robot obstacle avoidance method according to the present invention;
- FIG. 4 is a schematic diagram of determining an effective occlusion region in an embodiment of the robot obstacle avoidance method according to the present invention;
- FIG. 5 is a schematic diagram of the operating environment of a preferred embodiment of the obstacle avoidance control system 10 of the present invention;
- FIG. 6 is a functional block diagram of a preferred embodiment of the obstacle avoidance control system 10 of the present invention.
- the invention provides a robot obstacle avoidance method.
- FIG. 1 is a schematic flow chart of an embodiment of a robot obstacle avoidance method according to an embodiment of the present invention.
- the robot obstacle avoidance method comprises:
- Step S10: The obstacle avoidance control system of the robot acquires the robot's current positioning data (for example, its position and posture in the room) in real time or at intervals (for example, every 2 seconds), and determines, according to the current positioning data and the predetermined position data of each obstacle in the moving area, whether there is an obstacle on the path from the current position to the target position whose distance from the current position is less than the preset distance.
- the robot can use its own sensors, such as proximity sensors (for example, ultrasonic, infrared, or laser sensors), to locate itself and determine its distance from each obstacle in the predetermined moving area.
- Step S20: If there is an obstacle whose distance from the current position is less than the preset distance, calculate the shortest distance between the robot and the obstacle according to the acquired positioning data, the predetermined 3D model of the robot, and the predetermined 3D model of the obstacle.
- the robot continues to move along the path to the target position while detecting, in real time or at intervals, the distance between itself and the obstacles in the moving area. If it determines that there is an obstacle whose distance from the current position is less than the preset distance, it calculates the shortest distance between the robot and the obstacle according to the acquired positioning data, the predetermined 3D model of the robot, and the predetermined 3D model of the obstacle. This distance is used to determine whether the robot would collide with the obstacle when moving along the path to the target position in three-dimensional space. As a result, obstacles can be detected not only in the plane at the height of the robot's sensors but in all directions of the three-dimensional space, including potential obstacles in directions in which the robot has no sensor installed.
- the predetermined 3D model of the robot and the 3D models of the obstacles in the moving area may be pre-stored in a storage unit of the robot, or may be retrieved by the robot from an Internet of Things system server through a wireless communication unit; no limitation is imposed here.
- Step S30: Calculate the direction in which the robot should currently move according to the acquired positioning data, the calculated shortest distance, and the 3D model of the obstacle, and control the motion posture of the robot according to the calculated direction so as to avoid obstacles in all directions of the three-dimensional space.
- in this way, the shortest distance between the robot and the obstacle in three-dimensional space is calculated from the models, and the direction in which the robot should currently move is calculated to control its motion posture. Since the robot's movement direction is controlled by the shortest distance between the robot and the obstacle in three-dimensional space, obstacles in all directions of the three-dimensional space can be detected and avoided, and the robot's obstacle avoidance can be effectively controlled.
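The S10-S30 loop above can be sketched as follows. This is an illustrative outline only, with hypothetical function names not taken from the patent; the real method uses 3D model distances rather than the simple point distances used here.

```python
import math

# High-level sketch of the S10-S30 loop (names and the point-distance
# simplification are assumptions, not the patent's implementation): poll the
# positioning data, check for obstacles within the preset distance, and steer
# away from the closest one.
def obstacle_avoidance_step(robot_pos, obstacles, preset_distance):
    """One iteration of the avoidance loop; returns a unit heading or None."""
    near = [o for o in obstacles
            if math.dist(robot_pos, o) < preset_distance]          # step S10
    if not near:
        return None                    # no nearby obstacle: follow the path
    closest = min(near, key=lambda o: math.dist(robot_pos, o))     # step S20
    # Step S30 (simplified): head directly away from the closest obstacle.
    dx, dy = robot_pos[0] - closest[0], robot_pos[1] - closest[1]
    m = math.hypot(dx, dy)
    return (dx / m, dy / m)

heading = obstacle_avoidance_step((0.0, 0.0), [(2.0, 0.0), (10.0, 10.0)], 3.0)
print(heading)  # obstacle at (2, 0) is within range -> head toward (-1, 0)
```

In the patent, step S20 replaces the point distance with the shortest distance between the robot's 3D model and the obstacle's 3D model, and step S30 uses the virtual-force direction described later.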
- step S20 includes:
- Step S201: Pre-process the predetermined 3D model of the robot and the 3D model of the obstacle.
- Step S202: Calculate the shortest distance between the robot and the obstacle by applying a predetermined distance calculation rule to the acquired positioning data, the pre-processed robot 3D model data, and the pre-processed obstacle 3D model data.
- the 3D models of the robot and the obstacle can be pre-processed, for example converted into convex bodies, so that the shortest distance can be calculated more accurately and quickly.
- the robot 3D model pre-processing in step S201 includes: for each joint of the robot, directly using a predetermined algorithm (for example, the QuickHull fast convex hull algorithm) to find the smallest convex polyhedron enclosing the joint, thereby transforming the non-convex robot model into a convex model.
- the convexified robot 3D model effectively improves both the calculation speed and the calculation accuracy when the shortest distance vector is subsequently computed.
- for the obstacle 3D model, three pre-processing approaches are possible. The first constructs a convex bounding box around the non-convex polyhedron, turning it into a convex body for collision detection. The second performs convex decomposition of the non-convex polyhedron, transforming the non-convex model into multiple convex bodies for collision detection. The third first divides the obstacle 3D model into equal fan-shaped sectors and then performs convex decomposition on each individual sector; this fan-shaped division followed by convex decomposition is not only faster than the first two approaches but also more accurate.
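The convexification idea can be illustrated in two dimensions. The sketch below uses the monotone-chain algorithm (an assumption for illustration; the patent names QuickHull, whose 3D output is a convex polyhedron) to find the smallest convex polygon enclosing a non-convex outline:

```python
# Minimal 2D illustration of the "convexify each part" idea: the monotone-chain
# algorithm returns the smallest convex polygon enclosing a point set. The 3D
# analogue in the patent is the smallest convex polyhedron found by QuickHull.
def cross(o, a, b):
    """Z-component of the cross product of vectors o->a and o->b."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:                       # build the lower hull
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):             # build the upper hull
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]      # concatenation, endpoints deduplicated

# Non-convex "L"-shaped outline: the re-entrant corner (1, 1) is interior to
# the hull, so it disappears from the convexified model.
outline = [(0, 0), (2, 0), (2, 1), (1, 1), (1, 2), (0, 2)]
hull = convex_hull(outline)
print(hull)
```

The non-convex corner being absorbed into the hull is exactly why convex models speed up distance queries: convex-convex distance algorithms need no interior concavity handling.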
- the step of dividing the obstacle 3D model into fan-shaped sectors includes:
- X1. Construct the spherical bounding box B of the obstacle 3D model, with center O; the n sector portions of the spherical bounding box serve as the n model parts of the obstacle 3D model.
- X2. Through the center O, draw a line L coinciding with the z-axis of the three-dimensional coordinate system Oxyz; the xOz plane is the initial fan-shaped dividing plane, denoted α1, and α1 divides the obstacle 3D model into two parts.
- X3. Rotate α1 about the line L by a fixed angle δ (δ is the angle between adjacent sectors) to obtain a new plane α2; continue rotating the new plane to obtain α3, and after m-1 rotations obtain the m-th plane αm.
- the m planes divide the spherical bounding box B into 2m parts, so the obstacle 3D model is divided into n = 2m model parts.
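Because the m planes all pass through the z-axis, a point's sector is determined purely by its polar angle in the obstacle's x-y plane. A minimal sketch of that mapping (labels 0..n-1, a convention assumed here for illustration):

```python
import math

# Illustrative sketch: with m planes rotated about the z-axis, the bounding
# sphere splits into n = 2m equal sectors, and a point's sector label follows
# directly from its angle measured from the positive x-axis.
def sector_label(x, y, n):
    """Map a point to one of n equal sectors (labels 0..n-1) around the z-axis."""
    delta = 360.0 / n                               # angular width of a sector
    angle = math.degrees(math.atan2(y, x)) % 360.0  # angle in [0, 360)
    return int(angle // delta)

n = 32                              # m = 16 planes give 32 sectors of 11.25 deg
print(sector_label(1.0, 0.1, n))    # just above the +x axis
print(sector_label(-1.0, 0.0, n))   # on the -x axis, halfway around
```

This is the inverse of the hash mapping used later in the screening step, where a label i corresponds to the angle i * (360°/n).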
- the step of performing convex decomposition on each divided sector includes:
- use the Delaunay triangulation algorithm to triangulate the obstacle 3D model, producing a set of triangular patches, and construct a corresponding convex bump for each patch. For example, a triangular patch of zero thickness is extruded by a predetermined thickness along its plane normal vector to become a bump.
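The bump construction for a single patch can be sketched directly (the Delaunay step itself is omitted; this assumes the triangle vertices are already given):

```python
# Illustrative sketch of the "bump" construction: a zero-thickness triangular
# patch is extruded a small distance along its unit plane normal so it becomes
# a thin convex prism usable in convex-body distance queries.
def sub(a, b): return tuple(a[i] - b[i] for i in range(3))
def add(a, b): return tuple(a[i] + b[i] for i in range(3))
def cross3(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def extrude_triangle(p0, p1, p2, thickness):
    """Return the 6 vertices of the prism obtained by offsetting the triangle
    (p0, p1, p2) along its plane normal by `thickness`."""
    n = cross3(sub(p1, p0), sub(p2, p0))           # plane normal (unnormalized)
    length = sum(c * c for c in n) ** 0.5
    offset = tuple(c / length * thickness for c in n)
    base = (p0, p1, p2)
    top = tuple(add(p, offset) for p in base)      # base shifted by the offset
    return base + top

prism = extrude_triangle((0, 0, 0), (1, 0, 0), (0, 1, 0), 0.01)
print(prism[3])  # the first offset vertex sits 0.01 above the base plane
```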
- the predetermined distance calculation rule includes:
- screening the model parts obtained by dividing the obstacle 3D model to select the model parts for which the distance is to be calculated;
- calculating the shortest distance between the robot and each selected model part using a predetermined distance calculation algorithm (for example, the GJK algorithm); the minimum of these distances is the shortest distance between the robot and the obstacle 3D model.
- FIG. 2a is a schematic diagram showing the fan-shaped equalization of the obstacle 3D model in an embodiment of the robot obstacle avoidance method.
- FIG. 2b is a schematic diagram of the sector model portion labeled k in an embodiment of the robot obstacle avoidance method of the present invention.
- the predetermined screening algorithm includes:
- the n model parts obtained by fan-shaped division of the obstacle 3D model are used as n nodes of the obstacle, and key-value pairs are established for them with respect to the initial dividing plane (i.e., the xOz plane), with the hash function:
- Hash(i) = i * (360°/n)
- where Hash(i) represents the angle between the sector model portion labeled i and the positive x-axis of the obstacle coordinate system.
- the transformation matrix from the robot base to joint i is the product of the per-joint transformation matrices: Ti = A0·A1·A2·…·Ai-1·Ai.
- the real-time value Qi(x, y, z) of the origin of each joint's local coordinate system during the robot's motion is computed from Ti, from which the declination θ of the joint in the obstacle coordinate system can be obtained:
- Qi(x, y, z) represents the coordinates of the robot joint in the robot coordinate system;
- Tr represents the transformation matrix from the robot coordinate system to the obstacle coordinate system (a 4×4 matrix; since the robot and obstacle coordinate systems are both determined, the matrix can be computed directly). The coordinates Qi(xt, yt, zt) of the robot joint in the obstacle coordinate system are then obtained by applying Tr to Qi.
- with the declination of the joint in the obstacle coordinate system denoted θ, the corresponding sector label can be computed from the hash function Hash(i) expressing the declination mapping, and the sector model parts to be used for distance calculation are screened out. For example, if the computed sector label is k, the sector models in the range [k-M, k+N] can be selected for the shortest distance calculation, where M and N are preset values; that is, several sector model parts near the sector labeled k are selected as the model parts for the shortest distance calculation.
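The screening step above can be sketched end to end. This is a hypothetical illustration (labels 0..n-1 rather than the figures' 1..n, identity transform, M = N = 2 all assumed): transform the robot position into the obstacle frame with Tr, take its declination, invert Hash(i) = i * (360°/n) to get the label k, and keep the window [k-M, k+N]:

```python
import math

# Hypothetical sketch of sector screening, with labels 0..n-1.
def apply_transform(T, p):
    """Apply a 4x4 homogeneous transform (nested lists) to a 3D point."""
    x, y, z = p
    v = (x, y, z, 1.0)
    return tuple(sum(T[r][c] * v[c] for c in range(4)) for r in range(3))

def screen_sectors(T_r, joint, n, M, N):
    """Return the sector labels in the window [k-M, k+N] around the joint."""
    xt, yt, _ = apply_transform(T_r, joint)          # joint in obstacle frame
    theta = math.degrees(math.atan2(yt, xt)) % 360.0  # declination
    k = int(theta // (360.0 / n))                    # invert Hash(i) = i*(360/n)
    return [(k + d) % n for d in range(-M, N + 1)]   # window, wrapping around

# Identity transform for simplicity; the joint sits at (1800, -100) in the
# obstacle frame, as in the worked example with n = 32 sectors.
T_r = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
labels = screen_sectors(T_r, (1800.0, -100.0, 0.0), n=32, M=2, N=2)
print(labels)  # a window of sectors straddling the +x axis
```

With 0-based labels this selects sectors 29, 30, 31, 0, 1, which corresponds to the 1-based sectors 30-32 and 1-2 near the blocks 1, 2, 31, 32 named in the worked example.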
- FIG. 3a is a schematic diagram of a 3D model of a robot and an obstacle in an embodiment of the robot obstacle avoidance method according to the present invention.
- in this embodiment, the robot has only a moving chassis and no other joints such as arms.
- the robot 3D model is that of a robot 1500 mm tall with a motion chassis of radius 320 mm, and the obstacle 3D model is a simple cube of size 2200 mm × 2200 mm × 1000 mm.
- the current coordinates of the robot in the obstacle model coordinate system are (1800, -100).
- FIG. 3b is a schematic diagram showing fan-shaped equalization of a cubic obstacle model according to an embodiment of the robot obstacle avoidance method.
- the pre-processing mainly divides the obstacle model into fan-shaped sectors; as shown in FIG. 3b, the obstacle model is divided into 32 sectors, labeled counterclockwise from the positive x-axis.
- FIG. 3c is a schematic diagram of screening a model part in an embodiment of a robot obstacle avoidance method according to an embodiment of the present invention.
- the robot used in this embodiment has only a moving chassis and no other moving joints such as arms, so the chassis posture represents the overall posture of the robot; the current robot position is (1800, -100).
- FIG. 3d is a schematic diagram of calculating a shortest distance vector in an embodiment of a robot obstacle avoidance method according to the present invention.
- after screening, the candidate range has been reduced to the obstacle sectors labeled 1, 2, 31, and 32, and the shortest distance between the robot and each of these sectors is calculated directly with the GJK algorithm, as shown in FIG. 3d.
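For this particular example the distance can be checked by hand with a simpler stand-in for GJK (an assumption for illustration, not the patent's implementation): for a disc-shaped chassis and a convex polygon, the shortest separation is the point-to-polygon distance minus the chassis radius.

```python
# Brute-force stand-in for the GJK query on this example: min distance from a
# point to the polygon's edges, minus the chassis radius. (Valid here because
# the chassis center lies outside the polygon.)
def point_segment_dist(p, a, b):
    """Distance from point p to segment ab in 2D."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        t = 0.0
    else:
        t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx*dx + dy*dy)))
    cx, cy = ax + t * dx, ay + t * dy                # closest point on segment
    return ((px - cx) ** 2 + (py - cy) ** 2) ** 0.5

def disc_polygon_dist(center, radius, poly):
    edges = zip(poly, poly[1:] + poly[:1])
    return min(point_segment_dist(center, a, b) for a, b in edges) - radius

# Numbers from the example: a 2200 mm square obstacle footprint centered at the
# origin, chassis of radius 320 mm at (1800, -100) in the obstacle frame.
square = [(-1100, -1100), (1100, -1100), (1100, 1100), (-1100, 1100)]
d = disc_polygon_dist((1800, -100), 320, square)
print(d)  # 700 mm to the nearest face, minus the 320 mm radius
```

GJK computes the same convex-convex distances without enumerating features, by iterating on support points of the Minkowski difference, which is why it scales to 3D polyhedra.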
- step S30 includes:
- the obstacle avoidance control system of the robot analyzes whether obstacle avoidance is needed according to the calculated shortest distance: if the shortest distance is greater than a preset distance threshold, it determines that obstacle avoidance is not required; if the shortest distance is less than or equal to the threshold, it determines that obstacle avoidance is needed. If obstacle avoidance is needed, the system calculates the direction in which the robot should currently move according to the acquired positioning data, the calculated shortest distance, and the 3D model of the obstacle, and controls the robot's motion posture according to the calculated direction.
- the step of calculating the direction in which the robot should currently move according to the acquired positioning data, the calculated shortest distance, and the 3D model of the obstacle uses a first preset type of obstacle avoidance parameter (for example, a virtual repulsive force) and a second preset type of obstacle avoidance parameter (for example, one based on the distance between the target position and the robot's current position).
- FIG. 4 is a schematic diagram of determining an effective occlusion region in an embodiment of the robot obstacle avoidance method according to the present invention.
- the predetermined projection analysis rule is as follows:
- let point P1 in the coordinate-system plane represent the current position of the robot, and point P2 represent the position of the target point, i.e., the target position;
- the projection region P3 represents the projection of the obstacle 3D model onto the coordinate-system plane; connect P1 and P2 in the plane to obtain a line J;
- for a projection region P3 (e.g., S1 or S2), take any point PS in the region, drop a perpendicular from PS to the line J, and let PJ be the intersection of the perpendicular with J; this yields a vector, which is compared against the vector of the shortest distance;
- if the angle β between the two vectors is acute, the region containing PS is determined to be an effective occlusion region (for example, the effective occlusion projection region S2 in FIG. 4); otherwise, the region containing PS is not an effective occlusion region.
- the first preset type of obstacle avoidance parameter is a virtual repulsive force, and the second preset type is a virtual gravitational force.
- the first parameter is determined according to the calculated shortest distance and the area of the effective occlusion region; the second parameter is determined according to the distance between the target position and the robot's current position.
- the direction in which the robot should currently move is then determined from the first and second preset types of obstacle avoidance parameters: the resultant direction of the virtual gravitational force and the virtual repulsive force is calculated, and this resultant direction is the direction in which the robot should currently move.
- the first calculation rule is as follows. Let the vector of the shortest distance between the robot and the obstacle be given, and let s be the area of the effective occlusion region. The virtual repulsive force of the obstacle on the robot is computed from a relation involving the shortest-distance vector, the area s, and the following preset quantities:
- kr and br represent preset virtual repulsion coefficients;
- s0 represents a preset effective occlusion area threshold, s0 > 0;
- d0 represents a preset distance threshold, d0 > 0;
- the direction of the virtual repulsive force is the same as that of the shortest-distance vector.
- when the robot is far from the obstacle, i.e., the shortest distance exceeds the set threshold d0, no obstacle avoidance is performed and the magnitude of the repulsive force is 0. Within the obstacle avoidance range (shortest distance less than d0), when the area s of the effective occlusion region is relatively large and exceeds the set value s0, the repulsive force is increased so that the robot begins to avoid the obstacle earlier, which helps it steer around large obstacles in advance.
- k t represents the preset gravitational coefficient
- d t represents the distance between the target position and the robot's current position
- the direction of the virtual gravitational force points toward the target position
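The behavior described above can be sketched as a small potential-field helper. The patent gives the exact relations only as images, so the piecewise form below (zero repulsion beyond d_0, amplified repulsion when the occlusion area exceeds s_0, attraction proportional to d_t) is an illustrative assumption, with `k_r`, `b_r`, `k_t`, `s_0`, `d_0` as hypothetical tuning constants.

```python
def virtual_repulsion(d, s, k_r=1.0, b_r=2.0, d_0=1.5, s_0=0.5):
    """Illustrative repulsion magnitude: zero beyond d_0, amplified when the
    effective occlusion area s exceeds s_0 (not the patent's exact relation)."""
    if d >= d_0:
        return 0.0                              # far away: no avoidance needed
    gain = k_r * (b_r if s > s_0 else 1.0)      # large occlusion -> stronger push
    return gain * (1.0 / d - 1.0 / d_0)         # grows as the robot closes in

def virtual_gravity(d_t, k_t=0.8):
    """Illustrative attraction toward the target, proportional to distance d_t."""
    return k_t * d_t

# No repulsion beyond the threshold; inside it, a larger occlusion area
# produces a stronger repulsive force.
assert virtual_repulsion(2.0, 0.9) == 0.0
assert virtual_repulsion(1.0, 0.9) > virtual_repulsion(1.0, 0.1) > 0.0
```

The inequality in the last assertion is the point of the scheme: the same shortest distance pushes harder when the obstacle occludes more of the path, which is what lets the robot begin evasion earlier for large obstacles.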
- the invention further provides an obstacle avoidance control system for a robot.
- FIG. 5 is a schematic diagram of an operating environment of a preferred embodiment of the obstacle avoidance control system 10 of the present invention.
- the obstacle avoidance control system 10 is installed and operated in the robot 1.
- the robot 1 may include, but is not limited to, a memory 11, a processor 12, and a display 13.
- FIG. 5 shows only the robot 1 with components 11-13, but it should be understood that not all of the illustrated components are required; more or fewer components may be implemented instead.
- the memory 11 includes a memory and at least one type of readable storage medium.
- the memory provides a cache for the operation of the robot 1;
- the readable storage medium may be a non-volatile storage medium such as a flash memory, a hard disk, a multimedia card, a card type memory, or the like.
- the readable storage medium may be an internal storage unit of the robot 1, such as a hard disk or memory of the robot 1.
- the readable storage medium may also be an external storage device of the robot 1, such as a plug-in hard disk, a Smart Media Card (SMC), or a Secure Digital (SD) card equipped on the robot 1.
- the readable storage medium of the memory 11 is generally used to store application software and various types of data installed in the robot 1, such as program codes of the obstacle avoidance control system 10.
- the memory 11 can also be used to temporarily store data that has been output or is about to be output.
- the processor 12 in some embodiments, may be a Central Processing Unit (CPU), microprocessor or other data processing chip for running program code or processing data stored in the memory 11.
- the processor 12 executes the obstacle avoidance control system 10 to implement any of the steps of the above-described robot obstacle avoidance method.
- the display 13 in some embodiments may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch display, or the like.
- the display 13 is used to display information processed in the robot 1 and to display a visualized user interface, such as an application menu interface or an application icon interface.
- the components 11-13 of the robot 1 communicate with one another via a system bus.
- FIG. 6 is a functional block diagram of a preferred embodiment of the obstacle avoidance control system 10 of the present invention.
- the obstacle avoidance control system 10 can be divided into a determination module 01, a calculation module 02, and a control module 03.
- module refers to a series of computer program instruction segments capable of performing a specific function for describing the execution process of the obstacle avoidance control system 10 in the robot 1.
- when the processor 12 executes the computer program instruction segments of the modules of the obstacle avoidance control system 10, any step of the above-described robot obstacle avoidance method can be realized through the operations and functions implemented by the respective instruction segments.
- the following description will specifically describe the operations and functions implemented by the determining module 01, the computing module 02, and the control module 03.
- the determining module 01 is configured to acquire the robot's current positioning data (for example, its position and posture in a room) in real time or at timed intervals (for example, every 2 seconds), and to determine, according to the current positioning data and the predetermined position data of each obstacle in the moving area, whether the path from the current position to the target position contains an obstacle whose distance from the current position is less than a preset distance.
- the robot's own sensors, for example proximity sensors (ultrasonic, infrared, laser, etc.), can be used for positioning and for measuring the distance to each obstacle in the predetermined moving area.
- the calculating module 02 is configured to calculate, if there is an obstacle whose distance from the current position is less than the preset distance, the shortest distance between the robot and the obstacle according to the acquired positioning data, the predetermined 3D model of the robot, and the predetermined 3D model of the obstacle.
- the robot continuously moves along the path to the target position while detecting, in real time or at timed intervals,
- the distance between itself and the obstacles in the moving area. If an obstacle whose distance from the current position is less than the preset distance is found, the shortest distance between the robot and the obstacle is calculated from the acquired positioning data, the predetermined 3D model of the robot, and the predetermined 3D model of the obstacle.
- this distance is used to determine whether the robot would collide with the obstacle when moving along the path to the target position in three-dimensional space, so that obstacles can be detected not only in the plane at the height of the robot's sensors but throughout three-dimensional space,
- that is, potential obstacles can be detected in all directions of the three-dimensional space, both in directions in which the robot carries sensors and in directions in which it does not.
- the predetermined 3D model of the robot and the 3D model of each obstacle in the moving area may be pre-stored in the robot's storage unit, or may be obtained by the robot from an Internet of Things system server through a wireless communication unit; no limitation is imposed in this regard.
- the control module 03 is configured to calculate the direction in which the robot should currently move according to the acquired positioning data, the calculated shortest distance, and the 3D model of the obstacle, and to control the motion posture of the robot according to the calculated direction, so as to avoid potential obstacles in all directions of the three-dimensional space and effectively control obstacle avoidance while the robot moves along the path to the target position.
- the shortest distance between the robot and the obstacle in three-dimensional space is calculated from the 3D models, and the direction in which the robot should currently move is calculated to control its motion posture. Because the robot's movement direction is controlled by the shortest three-dimensional distance to the obstacle, obstacles in all directions of the three-dimensional space can be detected and avoided, and the robot's obstacle avoidance can be effectively controlled.
- calculation module 02 is further configured to:
- preprocess the predetermined 3D model of the robot and the 3D model of the obstacle, then calculate the shortest distance between the robot and the obstacle from the acquired positioning data, the preprocessed robot 3D model data, and the preprocessed obstacle 3D model data using a predetermined distance calculation rule.
- preprocessing the 3D models of the robot and the obstacle, for example converting them into convex bodies, allows the shortest distance to be calculated more accurately and quickly.
- the calculation module 02 is further configured to: for each joint of the robot, directly use a predetermined algorithm (for example, the QuickHull fast convex hull algorithm) to find the smallest convex polyhedron enclosing the joint, so as to convert the robot's non-convex model into a convex model.
- the robot 3D model processed by the above convexity can effectively improve the calculation speed and the calculation accuracy when the shortest distance vector is subsequently calculated.
- the first method constructs a convex bounding box of the non-convex polyhedron, converting it into a single convex body for collision detection
- the second method performs convex decomposition on the non-convex polyhedron, converting the non-convex model into multiple convex bodies for collision detection
- the third method first divides the obstacle 3D model evenly into sectors (fan-shaped division) and then performs convex decomposition on each individual sector
- compared with the first two methods, this sector division plus convex decomposition approach is faster to compute and more accurate
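The hull step named above uses QuickHull; as a stand-in, the sketch below applies Andrew's monotone chain algorithm (a different but equivalent convex-hull method, used here only for illustration) to 2D points, showing how a joint's sample points are wrapped in their smallest convex polygon.

```python
def convex_hull(points):
    """Andrew's monotone chain: smallest convex polygon enclosing 2D points,
    returned in counter-clockwise order (an illustrative stand-in for QuickHull)."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):  # z-component of (a-o) x (b-o); <= 0 means a right turn
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

    lower, upper = [], []
    for seq, out in ((pts, lower), (reversed(pts), upper)):
        for p in seq:
            while len(out) >= 2 and cross(out[-2], out[-1], p) <= 0:
                out.pop()  # drop points that are not on the convex boundary
            out.append(p)
    return lower[:-1] + upper[:-1]  # concatenate hulls, dropping duplicate ends

# The interior point (1, 1) is discarded; only the square's corners remain.
hull = convex_hull([(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)])
assert sorted(hull) == [(0, 0), (0, 2), (2, 0), (2, 2)]
```

Replacing each non-convex joint mesh with such a minimal enclosing convex shape is what allows convex-only distance algorithms such as GJK to be applied in the next step.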
- calculation module 02 is further configured to:
- a spherical bounding box of the obstacle 3D model to be sector-divided is built, and the center of the spherical bounding box is found; an initial sector-dividing plane passing through the center is set, and the initial plane is rotated multiple times around the center by a preset
- rotation angle, dividing the spherical bounding box into n sector parts; the n sector parts of the spherical bounding box serve as the n model parts of the obstacle 3D model.
- through the spherical center O, draw a line L coinciding with the z-axis of the three-dimensional coordinate system Oxyz; the xoz plane is then the initial sector-dividing plane, denoted π 1 , and π 1 divides the obstacle 3D model into two parts;
- rotate π 1 around the line L by a fixed angle θ (θ being the angle between adjacent sectors) to obtain a new plane π 2 , continue rotating to obtain the plane π 3 , and after m-1 rotations obtain the m-th plane π m ;
- the m planes divide the spherical bounding box B into 2m parts, so the obstacle 3D model is divided into 2m model parts.
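The division above (m planes through the z-axis producing 2m sectors) can be sketched numerically. The helper below, with an assumed sector count n = 2m, maps a point's azimuth around the z-axis to its sector label:

```python
import math

def sector_label(x, y, n=32):
    """Label (1..n) of the sector containing the point (x, y), where the n
    sectors are produced by rotating the initial xoz half-plane about the z-axis."""
    azimuth = math.degrees(math.atan2(y, x)) % 360.0  # angle from +x axis, in [0, 360)
    width = 360.0 / n                                 # angular width of one sector
    return int(azimuth // width) + 1

# A point on the +x axis falls in the first sector; a point on the -y axis
# (azimuth 270 degrees) falls in sector 25 when n = 32.
assert sector_label(1.0, 0.0) == 1
assert sector_label(0.0, -1.0) == 25
```

Only the azimuth matters here because every dividing plane contains the z-axis, so each sector is fully described by an angular interval around it.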
- calculation module 02 is further configured to:
- the Delaunay triangulation algorithm is used to triangulate the obstacle 3D model, producing a set of triangular patches; a corresponding convex body is then constructed for each triangular patch. For example, a zero-thickness triangular patch is extruded by a predetermined thickness along its plane normal vector to become a convex body.
- the predetermined distance calculation rule includes:
- each model part obtained by sector division of the obstacle 3D model is screened according to the robot's current positioning data and a predetermined screening algorithm, to select the model parts for which the distance is to be calculated;
- the shortest distance between the robot and the selected model parts is then calculated using a predetermined distance calculation algorithm (for example, the GJK algorithm); this shortest distance is the shortest distance between the robot and the obstacle 3D model.
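The GJK step computes the distance between two convex shapes. A full GJK implementation is beyond a short sketch, so the stand-in below (an assumption, not the patent's algorithm) brute-forces the minimum vertex-to-edge distance between two separated convex polygons, which yields the same result as GJK for non-overlapping shapes:

```python
def point_segment_dist(p, a, b):
    """Distance from point p to segment ab (all 2D tuples)."""
    ax, ay = b[0]-a[0], b[1]-a[1]
    px, py = p[0]-a[0], p[1]-a[1]
    L2 = ax*ax + ay*ay
    t = 0.0 if L2 == 0 else max(0.0, min(1.0, (px*ax + py*ay) / L2))
    dx, dy = p[0]-(a[0]+t*ax), p[1]-(a[1]+t*ay)  # p minus its projection on ab
    return (dx*dx + dy*dy) ** 0.5

def convex_distance(P, Q):
    """Shortest distance between two separated convex polygons P and Q,
    checking every vertex of one against every edge of the other."""
    best = float("inf")
    for A, B in ((P, Q), (Q, P)):
        for p in A:
            for i in range(len(B)):
                best = min(best, point_segment_dist(p, B[i], B[(i+1) % len(B)]))
    return best

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
shifted = [(3, 0), (4, 0), (4, 1), (3, 1)]
assert convex_distance(square, shifted) == 2.0  # gap between x=1 and x=3
```

GJK reaches the same answer in far fewer support-point evaluations, which is why it, rather than this O(n·m) scan, is named for the per-sector distance queries.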
- the predetermined screening algorithm comprises:
- the n model parts obtained by sector division of the obstacle 3D model are respectively used as the n nodes of the obstacle, and a hash table keyed by the declination relative to the initial sector-dividing plane (i.e., the xoz plane) is built for model node management; for the model part labeled i, the declination mapping is defined by the hash function:
- Hash(i) = i * (360°/n)
- where Hash(i) represents the declination between the sector model part labeled i and the positive X axis of the obstacle coordinate system
- the kinematic equations of the robot are established, and the pose of each joint is computed from them:
- T i = A 0 A 1 A 2 …A i-1 A i
- where A k (k = 1, 2, …, i) is the homogeneous transformation matrix between robot joint coordinate systems, A 0 is the robot's current position matrix, and T i is the pose of the i-th joint relative to the robot coordinate system;
- the real-time coordinates Q i (x, y, z) of the origin of each joint's local coordinate system during the robot's motion are computed from T i , where Q i (x, y, z) represents the coordinates of the robot joint in the robot coordinate system;
- T r represents the transformation matrix from the robot coordinate system to the obstacle coordinate system (a 4×4 matrix; since the robot coordinate system and the obstacle coordinate system are both determined, the matrix can be computed directly); the coordinates Q i (x t , y t , z t ) of the robot joint in the obstacle coordinate system are then:
- Q i (x t , y t , z t ) = T r Q i (x, y, z)
- the declination of the joint in the obstacle coordinate system is α
- the label of the corresponding sector model part can be computed from α using the hash function Hash(i) that represents the declination mapping.
- the sector model parts to be used for distance calculation are then screened out. For example, if the computed sector model part has label k, the sector models in the range [k-M, k+N] can be selected for the shortest-distance calculation, where M and N are preset values; that is, several sector model parts near the part labeled k are selected as the model parts for the shortest-distance calculation.
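The screening chain (transform the joint into the obstacle frame, take its azimuth, invert Hash(i) to get a label, then keep the neighboring sectors) can be sketched as follows; the 4×4 transform and the values of M and N are illustrative assumptions:

```python
import math

def screen_sectors(joint_xyz, T_r, n=32, M=1, N=2):
    """Return the labels of the sector model parts near a robot joint.
    joint_xyz: joint origin in the robot frame; T_r: 4x4 row-major transform
    from the robot frame to the obstacle frame."""
    x, y, z = joint_xyz
    # Homogeneous transform: Q_t = T_r * Q
    xt = T_r[0][0]*x + T_r[0][1]*y + T_r[0][2]*z + T_r[0][3]
    yt = T_r[1][0]*x + T_r[1][1]*y + T_r[1][2]*z + T_r[1][3]
    alpha = math.degrees(math.atan2(yt, xt)) % 360.0  # declination in obstacle frame
    k = round(alpha / (360.0 / n)) or n               # nearest sector label, 1..n
    # Keep sectors [k-M, k+N], wrapping labels around past n.
    return [((k + d - 1) % n) + 1 for d in range(-M, N + 1)]

# With the identity transform, a joint at (1800, -100) sits at azimuth
# ~356.8 degrees, i.e. sector 32, so sectors 31, 32, 1, 2 are kept
# (matching the embodiment described below).
I4 = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
assert screen_sectors((1800, -100, 0), I4) == [31, 32, 1, 2]
```

Keeping only a handful of neighboring sectors is what makes the subsequent GJK queries cheap: distance is computed against a few convex pieces instead of the whole obstacle model.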
- in this embodiment, the robot has only a moving chassis and no other joints such as arms.
- the robot 3D model is 1500 mm tall with a motion chassis radius of 320 mm, and the obstacle
- 3D model is a simple cuboid measuring 2200 mm × 2200 mm × 1000 mm.
- the current coordinates of the robot in the obstacle model coordinate system are (1800, -100).
- because the robot in this embodiment has only a moving chassis and no other moving joints such as arms, the chassis pose represents the overall pose of the robot; the current robot position is (1800, -100).
- rounding gives 32, so the sector block to be used for distance calculation is number 32; that is, the robot is closest to the obstacle block numbered 32.
- the obstacle block range is [31, 34]; numbers above 32 require a simple wrap-around conversion, so 33 becomes obstacle block 1 and 34 becomes obstacle block 2. As shown in FIG. 3c, the obstacle blocks finally selected for the shortest-distance calculation are numbers 31, 32, 1, and 2.
- with the candidate range reduced to obstacle blocks (1, 2, 31, 32), the shortest distance between the robot and each of these blocks is calculated directly with the GJK algorithm, as shown in FIG. 3d.
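The arithmetic behind this embodiment can be checked directly: with n = 32 sectors of 11.25° each, the position (1800, −100) sits at azimuth ≈ 356.82°, which rounds to sector 32, and the neighbor window [31, 34] wraps past 32 to give blocks 31, 32, 1, 2.

```python
import math

n = 32
width = 360.0 / n                                   # 11.25 degrees per sector
alpha = math.degrees(math.atan2(-100, 1800)) % 360  # azimuth of (1800, -100)
label = round(alpha / width)                        # nearest sector label

assert abs(alpha - 356.82) < 0.01
assert label == 32
# Neighbors [label-1, label+2] wrap past 32 back around to 1 and 2.
selected = [((label + d - 1) % n) + 1 for d in (-1, 0, 1, 2)]
assert selected == [31, 32, 1, 2]
```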
- control module 03 is further configured to:
- the direction in which the robot should currently move is calculated according to the acquired positioning data, the calculated shortest distance, and the 3D model of the obstacle, and the motion posture of the robot is controlled according to the calculated direction.
- control module 03 is further configured to:
- a first preset type obstacle avoidance parameter (for example, a virtual repulsive force) is determined
- a second preset type obstacle avoidance parameter is determined (for example, according to the distance between the target point position and the robot's current position)
- the predetermined projection analysis rule is:
- the point P 1 in the coordinate-system plane represents the robot's current position
- the point P 2 represents the position of the target point, that is, the target position
- the projection region P3 represents the projection of the obstacle 3D model onto the coordinate-system plane; P 1 and P 2 are connected in the plane to obtain a straight line J;
- take any point P S in the projection region P3 (e.g. S 1 or S 2 ), drop a perpendicular from P S to the line J, the foot of the perpendicular on J being P J , which gives the vector from P J to P S ; then compute the angle θ between the shortest-distance vector and this vector
- the angle θ: if θ is acute, the region containing the point P S is determined to be an effective occlusion region (for example, the effective occlusion projection region S 2 in FIG. 4); if θ is not acute, the region containing P S is determined not to be an effective occlusion region.
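The projection test above (perpendicular foot on line J, then an acute-angle check against the shortest-distance vector) can be sketched in 2D; the shortest-distance vector supplied here is an assumed input computed elsewhere.

```python
def is_effective_occlusion(p1, p2, p_s, d_vec):
    """True if the projected region point p_s lies in an effective occlusion
    region: the angle between the shortest-distance vector d_vec and the
    vector from the perpendicular foot P_J (on line P1P2) to p_s is acute."""
    jx, jy = p2[0]-p1[0], p2[1]-p1[1]            # direction of line J = P1P2
    t = ((p_s[0]-p1[0])*jx + (p_s[1]-p1[1])*jy) / (jx*jx + jy*jy)
    pj = (p1[0] + t*jx, p1[1] + t*jy)            # foot of perpendicular, P_J
    vx, vy = p_s[0]-pj[0], p_s[1]-pj[1]          # vector P_J -> P_S
    dot = vx*d_vec[0] + vy*d_vec[1]
    return dot > 0                               # acute angle <=> positive dot

# Projection points on the same side of J as the shortest-distance vector
# count as effective occlusion; points on the opposite side do not.
assert is_effective_occlusion((0, 0), (10, 0), (5, 2), (0, 1)) is True
assert is_effective_occlusion((0, 0), (10, 0), (5, -2), (0, 1)) is False
```

The dot-product sign test is equivalent to the acute-angle condition in the text, since cos θ and the dot product share a sign.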
- the control module 03 is further configured to:
- the resultant direction of the virtual gravitational force and the virtual repulsive force is calculated; this resultant force direction is the direction in which the robot should currently move.
- the first calculation rule is a relation that converts the shortest-distance vector between the robot and the obstacle and the area s of the effective occlusion region into the virtual repulsive force exerted by the obstacle on the robot, in which:
- k r and b r represent preset virtual repulsion coefficients
- s 0 represents a preset effective occlusion area threshold, s 0 > 0
- d 0 represents a preset distance threshold, d 0 > 0
- the direction of the virtual repulsive force is the same as that of the shortest-distance vector
- no obstacle avoidance is performed when the robot is far from the obstacle: once the shortest distance exceeds the set distance threshold d 0 , the magnitude of the virtual repulsive force is 0
- within the obstacle avoidance range (shortest distance less than d 0 ), when the area s of the effective occlusion region is relatively large and exceeds the set value s 0 , the repulsive force is increased accordingly, so that the robot starts turning away earlier and can avoid large obstacles in advance
- the virtual gravitational force exerted by the target position on the robot, where k t represents the preset gravitational coefficient, d t represents the distance between the target position and the robot's current position, and the direction of the virtual gravitational force points toward the target position.
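The resultant-direction step can be sketched as a small vector sum; the attraction and repulsion vectors are assumed inputs obtained from the first and second calculation rules.

```python
import math

def resultant_direction(attract, repulse):
    """Unit vector of the sum of the virtual gravitational (attract) and
    virtual repulsive (repulse) 2D force vectors: the heading to move in."""
    fx, fy = attract[0] + repulse[0], attract[1] + repulse[1]
    norm = math.hypot(fx, fy)
    if norm == 0:
        return (0.0, 0.0)  # forces cancel exactly: no preferred heading
    return (fx / norm, fy / norm)

# Attraction toward +x and repulsion toward +y combine into a diagonal heading.
dx, dy = resultant_direction((3.0, 0.0), (0.0, 3.0))
assert abs(dx - dy) < 1e-12 and abs(dx - math.sqrt(0.5)) < 1e-12
```

Normalizing the resultant separates the heading decision from the force magnitudes, so the motion controller can apply its own speed profile along the returned direction.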
- the present invention also provides a computer readable storage medium.
- the computer readable storage medium stores an obstacle avoidance control system of a robot, and the obstacle avoidance control system can be executed by at least one processor to perform the following steps:
- Step S10: acquire the robot's current positioning data in real time or at timed intervals, and determine, according to the current positioning data and the position data of each obstacle in the predetermined moving area, whether the path from the current position to the target position contains an obstacle whose distance from the current position is less than a preset distance;
- Step S20: if there is an obstacle whose distance from the current position is less than the preset distance, calculate the shortest distance between the robot and the obstacle according to the acquired positioning data, the predetermined 3D model of the robot, and the predetermined 3D model of the obstacle;
- Step S30: calculate the direction in which the robot should currently move according to the acquired positioning data, the calculated shortest distance, and the 3D model of the obstacle, and control the motion posture of the robot according to the calculated direction so as to avoid the obstacle.
- step S20 includes:
- Step S201: preprocess the predetermined 3D model of the robot and the 3D model of the obstacle.
- Step S202: calculate the shortest distance between the robot and the obstacle from the acquired positioning data, the preprocessed robot 3D model data, and the preprocessed obstacle 3D model data using the predetermined distance calculation rule.
- the robot 3D model preprocessing in step S201 includes: for each joint of the robot, directly using a predetermined algorithm (for example, the QuickHull fast convex hull algorithm) to find the smallest convex polyhedron enclosing the joint, so as to convert the robot's non-convex model into a convex model.
- the robot 3D model processed by the above convexity can effectively improve the calculation speed and the calculation accuracy when the shortest distance vector is subsequently calculated.
- the first method constructs a convex bounding box of the non-convex polyhedron, converting it into a single convex body for collision detection
- the second method performs convex decomposition on the non-convex polyhedron, converting the non-convex model into multiple convex bodies for collision detection
- the third method first divides the obstacle 3D model evenly into sectors (fan-shaped division) and then performs convex decomposition on each individual sector
- compared with the first two methods, this sector division plus convex decomposition approach is faster to compute and more accurate
- the step of fanning the obstacle 3D model includes:
- the n sector parts of the spherical bounding box serve as the n model parts of the obstacle 3D model.
- step of performing convex decomposition on the evenly divided single sectors includes:
- the Delaunay triangulation algorithm is used to triangulate the obstacle 3D model, producing a set of triangular patches; a corresponding convex body is then constructed for each triangular patch. For example, a zero-thickness triangular patch is extruded by a predetermined thickness along its plane normal vector to become a convex body.
- the predetermined distance calculation rule includes:
- each model part obtained by sector division of the obstacle 3D model is screened according to the robot's current positioning data and a predetermined screening algorithm, to select the model parts for which the distance is to be calculated;
- the shortest distance between the robot and the selected model parts is then calculated using a predetermined distance calculation algorithm (for example, the GJK algorithm); this shortest distance is the shortest distance between the robot and the obstacle 3D model.
- step S30 includes:
- the obstacle avoidance control system of the robot analyzes whether obstacle avoidance is needed according to the calculated shortest distance: if the calculated shortest distance is greater than a preset distance threshold, it determines that obstacle avoidance is not required; if the calculated shortest distance is less than or equal to the preset distance threshold, it determines that obstacle avoidance is required. If obstacle avoidance is needed, the system calculates the direction in which the robot should currently move according to the acquired positioning data, the calculated shortest distance, and the 3D model of the obstacle, and controls the motion posture of the robot according to the calculated direction.
- the step of calculating the direction in which the current robot should move according to the acquired positioning data, the calculated shortest distance, and the 3D model of the obstacle includes:
- a first preset type obstacle avoidance parameter (for example, a virtual repulsive force) is determined
- a second preset type obstacle avoidance parameter is determined (for example, according to the distance between the target point position and the robot's current position)
- the predetermined projection analysis rule is:
- the point P 1 in the coordinate-system plane represents the robot's current position
- the point P 2 represents the position of the target point, that is, the target position
- the projection region P3 represents the projection of the obstacle 3D model onto the coordinate-system plane; P 1 and P 2 are connected in the plane to obtain a straight line J;
- take any point P S in the projection region P3 (e.g. S 1 or S 2 ), drop a perpendicular from P S to the line J, the foot of the perpendicular on J being P J , which gives the vector from P J to P S ; then compute the angle θ between the shortest-distance vector and this vector
- the angle θ: if θ is acute, the region containing the point P S is determined to be an effective occlusion region (for example, the effective occlusion projection region S 2 in FIG. 4); if θ is not acute, the region containing P S is determined not to be an effective occlusion region.
- the first preset type obstacle avoidance parameter is a virtual repulsive force
- the second preset type obstacle avoidance parameter is a virtual gravitational force
- the first preset type obstacle avoidance parameter is determined according to the calculated shortest distance and the area of the effective occlusion region
- the second preset type obstacle avoidance parameter is determined according to the distance between the target position and the robot's current position
- the direction in which the robot should currently move is determined according to the first preset type obstacle avoidance parameter and the second preset type obstacle avoidance parameter
- the resultant direction of the virtual gravitational force and the virtual repulsive force is calculated; this resultant force direction is the direction in which the robot should currently move.
- the first calculation rule is a relation that converts the shortest-distance vector between the robot and the obstacle and the area s of the effective occlusion region into the virtual repulsive force exerted by the obstacle on the robot, in which:
- k r and b r represent preset virtual repulsion coefficients
- s 0 represents a preset effective occlusion area threshold, s 0 > 0
- d 0 represents a preset distance threshold, d 0 > 0
- the direction of the virtual repulsive force is the same as that of the shortest-distance vector
- no obstacle avoidance is performed when the robot is far from the obstacle: once the shortest distance exceeds the set distance threshold d 0 , the magnitude of the virtual repulsive force is 0
- within the obstacle avoidance range (shortest distance less than d 0 ), when the area s of the effective occlusion region is relatively large and exceeds the set value s 0 , the repulsive force is increased accordingly, so that the robot starts turning away earlier and can avoid large obstacles in advance
- the virtual gravitational force exerted by the target position on the robot, where k t represents the preset gravitational coefficient, d t represents the distance between the target position and the robot's current position, and the direction of the virtual gravitational force points toward the target position.
- the foregoing embodiment methods can be implemented by means of software plus a necessary general-purpose hardware platform, or by hardware, although in many cases the former is the better implementation.
- the technical solution of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or
- an optical disc) that includes a number of instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, or a network device, etc.) to perform the methods described in the various embodiments of the present invention.
Claims (20)
- An obstacle avoidance control system for a robot, characterized in that the obstacle avoidance control system comprises: a determining module, configured to acquire the robot's current positioning data in real time or at timed intervals, and to determine, according to the current positioning data and predetermined position data of each obstacle in a moving area, whether the path from the current position to a target position contains an obstacle whose distance from the current position is less than a preset distance; a calculating module, configured to, if there is an obstacle whose distance from the current position is less than the preset distance, calculate the shortest distance between the robot and the obstacle according to the acquired positioning data, a predetermined 3D model of the robot, and a predetermined 3D model of the obstacle; and a control module, configured to calculate the direction in which the robot should currently move according to the acquired positioning data, the calculated shortest distance, and the 3D model of the obstacle, and to control the motion posture of the robot according to the calculated direction so as to avoid the obstacle.
- The obstacle avoidance control system of claim 1, wherein the calculating module is further configured to: preprocess the predetermined 3D model of the robot and the 3D model of the obstacle, using a predetermined algorithm to find, for each joint of the robot, the smallest convex polyhedron enclosing the joint, so as to convert the robot's non-convex model into a convex model; divide the obstacle 3D model evenly into sectors and perform convex decomposition on each individual sector; and calculate the shortest distance between the robot and the obstacle from the acquired positioning data, the preprocessed robot 3D model data, and the preprocessed obstacle 3D model data using a predetermined distance calculation rule.
- The obstacle avoidance control system of claim 2, wherein the calculating module is further configured to: build a spherical bounding box of the obstacle 3D model to be divided into sectors and find the center of the spherical bounding box; and set an initial sector-dividing plane passing through the center and rotate it multiple times around the center by a preset rotation angle, so as to divide the spherical bounding box evenly into n sector parts, the n sector parts of the spherical bounding box serving as the n model parts of the obstacle 3D model.
- The obstacle avoidance control system of claim 2, wherein the predetermined distance calculation rule comprises: screening the model parts obtained by sector division of the obstacle 3D model according to the robot's current positioning data and a predetermined screening algorithm, to select the model parts for distance calculation; and calculating, from the acquired positioning data and the selected model parts, the shortest distance between the robot and the selected model parts using a predetermined distance calculation algorithm, this shortest distance being the shortest distance between the robot and the obstacle; the predetermined screening algorithm comprising: taking the n model parts of the obstacle 3D model as the n nodes of the obstacle, and building a hash table keyed by the declination relative to the initial sector-dividing plane for model node management; labeling the model parts and establishing, for the model part labeled i, a declination mapping defined by the hash function: Hash(i) = i*(360°/n), where Hash(i) represents the declination between the sector model part labeled i and the positive X axis of the obstacle coordinate system; establishing the kinematic equations of the robot and computing the pose of each joint from them, the kinematic equation being: Ti = A0A1A2…Ai-1Ai, where Ak (k = 1, 2, …, i) is the homogeneous transformation matrix between robot joint coordinate systems, A0 is the robot's current position matrix, and Ti is the pose of the i-th joint relative to the robot coordinate system; computing from Ti the coordinates Qi(x, y, z) of each joint in the robot coordinate system during the robot's motion, and computing the transformation matrix Tr from the robot coordinate system to the obstacle coordinate system, so that the coordinates Qi(xt, yt, zt) of a robot joint in the obstacle coordinate system are: Qi(xt, yt, zt) = TrQi(x, y, z); obtaining the declination α of the joint in the obstacle coordinate system; and computing, from α and the hash function Hash(i), the model part with the corresponding label, and screening out, based on that part, the model parts for distance calculation.
- The obstacle avoidance control system of claim 1, wherein the control module is further configured to: project the robot and the obstacle 3D model onto the same coordinate-system plane; calculate, according to a predetermined projection analysis rule and the coordinates of each point on the outer contour of the projection region of the obstacle 3D model on the plane, the area of the region of the obstacle 3D model's projection that effectively occludes the path between the robot's current position and the target position; determine a first preset type obstacle avoidance parameter from the calculated shortest distance and the area of the effective occlusion region; determine a second preset type obstacle avoidance parameter from the distance between the target position and the robot's current position; and determine the direction in which the robot should currently move from the first preset type obstacle avoidance parameter and the second preset type obstacle avoidance parameter.
- The obstacle avoidance control system of claim 5, wherein the predetermined projection analysis rule is: set the point P1 in the coordinate-system plane as the robot's current position, the point P2 as the target position, and the projection region P3 as the projection of the obstacle 3D model onto the plane, and connect P1 and P2 in the plane to obtain a straight line J; if the line J has no intersection with the projection region P3, or only one intersection, determine that no effective occlusion region exists;
- The obstacle avoidance control system of claim 5, wherein the first preset type obstacle avoidance parameter is a virtual repulsive force and the second preset type obstacle avoidance parameter is a virtual gravitational force, the control module being further configured to: calculate the virtual repulsive force acting on the robot from the calculated shortest distance and the area of the effective occlusion region using a first calculation rule; calculate the virtual gravitational force acting on the robot from the distance between the current position and the target point using a second calculation rule; and calculate the direction of the resultant of the virtual gravitational force and the virtual repulsive force as the direction in which the robot should currently move.
- A robot obstacle avoidance method, characterized in that the method comprises the following steps: A1, acquiring the robot's current positioning data in real time or at timed intervals, and determining, according to the current positioning data and predetermined position data of each obstacle in a moving area, whether the path from the current position to a target position contains an obstacle whose distance from the current position is less than a preset distance; A2, if there is an obstacle whose distance from the current position is less than the preset distance, calculating the shortest distance between the robot and the obstacle according to the acquired positioning data, a predetermined 3D model of the robot, and a predetermined 3D model of the obstacle; and A3, calculating the direction in which the robot should currently move according to the acquired positioning data, the calculated shortest distance, and the 3D model of the obstacle, and controlling the motion posture of the robot according to the calculated direction so as to avoid the obstacle.
- The robot obstacle avoidance method of claim 10, wherein step A2 comprises: preprocessing the predetermined 3D model of the robot and the 3D model of the obstacle, using a predetermined algorithm to find, for each joint of the robot, the smallest convex polyhedron enclosing the joint, so as to convert the robot's non-convex model into a convex model; dividing the obstacle 3D model evenly into sectors and performing convex decomposition on each individual sector; and calculating the shortest distance between the robot and the obstacle from the acquired positioning data, the preprocessed robot 3D model data, and the preprocessed obstacle 3D model data using a predetermined distance calculation rule.
- The robot obstacle avoidance method of claim 11, wherein step A2 further comprises: building a spherical bounding box of the obstacle 3D model to be divided into sectors and finding the center of the spherical bounding box; and setting an initial sector-dividing plane passing through the center and rotating it multiple times around the center by a preset rotation angle, so as to divide the spherical bounding box evenly into n sector parts, the n sector parts of the spherical bounding box serving as the n model parts of the obstacle 3D model.
- The robot obstacle avoidance method of claim 11, wherein the predetermined distance calculation rule comprises: screening the model parts obtained by sector division of the obstacle 3D model according to the robot's current positioning data and a predetermined screening algorithm, to select the model parts for distance calculation; and calculating, from the acquired positioning data and the selected model parts, the shortest distance between the robot and the selected model parts using a predetermined distance calculation algorithm, this shortest distance being the shortest distance between the robot and the obstacle; the predetermined screening algorithm comprising: taking the n model parts of the obstacle 3D model as the n nodes of the obstacle, and building a hash table keyed by the declination relative to the initial sector-dividing plane for model node management; labeling the model parts and establishing, for the model part labeled i, a declination mapping defined by the hash function: Hash(i) = i*(360°/n), where Hash(i) represents the declination between the sector model part labeled i and the positive X axis of the obstacle coordinate system; establishing the kinematic equations of the robot and computing the pose of each joint from them, the kinematic equation being: Ti = A0A1A2…Ai-1Ai, where Ak (k = 1, 2, …, i) is the homogeneous transformation matrix between robot joint coordinate systems, A0 is the robot's current position matrix, and Ti is the pose of the i-th joint relative to the robot coordinate system; computing from Ti the coordinates Qi(x, y, z) of each joint in the robot coordinate system during the robot's motion, and computing the transformation matrix Tr from the robot coordinate system to the obstacle coordinate system, so that the coordinates Qi(xt, yt, zt) of a robot joint in the obstacle coordinate system are: Qi(xt, yt, zt) = TrQi(x, y, z); obtaining the declination α of the joint in the obstacle coordinate system; and computing, from α and the hash function Hash(i), the model part with the corresponding label, and screening out, based on that part, the model parts for distance calculation.
- The robot obstacle avoidance method of claim 10, wherein step A3 comprises: projecting the robot and the obstacle 3D model onto the same coordinate-system plane; calculating, according to a predetermined projection analysis rule and the coordinates of each point on the outer contour of the projection region of the obstacle 3D model on the plane, the area of the region of the obstacle 3D model's projection that effectively occludes the path between the robot's current position and the target position; determining a first preset type obstacle avoidance parameter from the calculated shortest distance and the area of the effective occlusion region; determining a second preset type obstacle avoidance parameter from the distance between the target position and the robot's current position; and determining the direction in which the robot should currently move from the first preset type obstacle avoidance parameter and the second preset type obstacle avoidance parameter.
- The robot obstacle avoidance method of claim 14, wherein the predetermined projection analysis rule is: setting the point P1 in the coordinate-system plane as the robot's current position, the point P2 as the target position, and the projection region P3 as the projection of the obstacle 3D model onto the plane, and connecting P1 and P2 in the plane to obtain a straight line J; if the line J has no intersection with the projection region P3, or only one intersection, determining that no effective occlusion region exists;
- The robot obstacle avoidance method of claim 14, wherein the first preset type obstacle avoidance parameter is a virtual repulsive force and the second preset type obstacle avoidance parameter is a virtual gravitational force, step A3 further comprising: calculating the virtual repulsive force acting on the robot from the calculated shortest distance and the area of the effective occlusion region using a first calculation rule; calculating the virtual gravitational force acting on the robot from the distance between the current position and the target point using a second calculation rule; and calculating the direction of the resultant of the virtual gravitational force and the virtual repulsive force as the direction in which the robot should currently move.
- A robot, characterized in that the robot comprises a processor and a memory, the memory storing an obstacle avoidance control system of the robot, the obstacle avoidance control system being executable by the processor to implement any step of the robot obstacle avoidance method according to any one of claims 10 to 18.
- A computer readable storage medium, characterized in that the computer readable storage medium stores an obstacle avoidance control system of a robot, the obstacle avoidance control system being executable by at least one processor to implement any step of the robot obstacle avoidance method according to any one of claims 10 to 18.
Priority Applications (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2018533757A JP6716178B2 (ja) | 2017-03-27 | 2017-06-30 | ロボットの障害物回避制御システム、方法、ロボット及びプログラム |
EP17897209.7A EP3410246B1 (en) | 2017-03-27 | 2017-06-30 | Robot obstacle avoidance control system and method, robot, and storage medium |
US16/084,231 US11059174B2 (en) | 2017-03-27 | 2017-06-30 | System and method of controlling obstacle avoidance of robot, robot and storage medium |
KR1020187018065A KR102170928B1 (ko) | 2017-03-27 | 2017-06-30 | 로봇의 장애물 회피 제어 시스템, 방법, 로봇 및 저장매체 |
SG11201809892QA SG11201809892QA (en) | 2017-03-27 | 2017-06-30 | System and method of controlling obstacle avoidance of robot, robot and storage medium |
AU2017404562A AU2017404562B2 (en) | 2017-03-27 | 2017-06-30 | System and method of controlling obstacle avoidance of robot, robot and storage medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710186581.6 | 2017-03-27 | ||
CN201710186581.6A CN107688342B (zh) | 2017-03-27 | 2017-03-27 | Obstacle avoidance control system and method for a robot |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018176668A1 true WO2018176668A1 (zh) | 2018-10-04 |
Family
ID=61152364
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2017/091368 WO2018176668A1 (zh) | 2017-03-27 | 2017-06-30 | Robot obstacle avoidance control system and method, robot, and storage medium |
Country Status (9)
Country | Link |
---|---|
US (1) | US11059174B2 (zh) |
EP (1) | EP3410246B1 (zh) |
JP (1) | JP6716178B2 (zh) |
KR (1) | KR102170928B1 (zh) |
CN (1) | CN107688342B (zh) |
AU (1) | AU2017404562B2 (zh) |
SG (1) | SG11201809892QA (zh) |
TW (1) | TWI662388B (zh) |
WO (1) | WO2018176668A1 (zh) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111046743A (zh) * | 2019-11-21 | 2020-04-21 | 新奇点企业管理集团有限公司 | Obstacle information labeling method and apparatus, electronic device, and storage medium |
CN111207750A (zh) * | 2019-12-31 | 2020-05-29 | 合肥赛为智能有限公司 | Dynamic path planning method for a robot |
CN112148013A (zh) * | 2020-09-25 | 2020-12-29 | 深圳优地科技有限公司 | Robot obstacle avoidance method, robot, and storage medium |
CN112704878A (zh) * | 2020-12-31 | 2021-04-27 | 深圳市其乐游戏科技有限公司 | Unit position adjustment method, system, device, and storage medium in a swarm game |
CN113110594A (zh) * | 2021-05-08 | 2021-07-13 | 北京三快在线科技有限公司 | Method, apparatus, and storage medium for controlling UAV obstacle avoidance, and UAV |
CN113282984A (zh) * | 2021-05-21 | 2021-08-20 | 长安大学 | Emergency evacuation simulation method for people in public places |
CN114859914A (zh) * | 2022-05-09 | 2022-08-05 | 广东利元亨智能装备股份有限公司 | Obstacle detection method, apparatus, device, and storage medium |
CN115890676A (zh) * | 2022-11-28 | 2023-04-04 | 深圳优地科技有限公司 | Robot control method, robot, and storage medium |
Families Citing this family (38)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108717302B (zh) * | 2018-05-14 | 2021-06-25 | 平安科技(深圳)有限公司 | Method and apparatus for a robot to follow a person, storage medium, and robot |
CN109461185B (zh) * | 2018-09-10 | 2021-08-17 | 西北工业大学 | Active target obstacle avoidance method for robots in complex scenes |
CN109465835A (zh) * | 2018-09-25 | 2019-03-15 | 华中科技大学 | A priori safety prediction method for dual-arm service robot operation in dynamic environments |
TWI721324B (zh) * | 2018-10-10 | 2021-03-11 | 鴻海精密工業股份有限公司 | Electronic device and method for judging three-dimensional objects |
CN109284574B (zh) * | 2018-10-25 | 2022-12-09 | 西安科技大学 | Non-probabilistic reliability analysis method for serial truss structural systems |
KR102190101B1 (ko) * | 2019-03-08 | 2020-12-11 | (주)아이로텍 | Method for displaying a path guidance tool for robot obstacle collision avoidance, and collision avoidance simulation system using the same |
CN110442126A (zh) * | 2019-07-15 | 2019-11-12 | 北京三快在线科技有限公司 | Mobile robot and obstacle avoidance method therefor |
CN110390829A (zh) * | 2019-07-30 | 2019-10-29 | 北京百度网讯科技有限公司 | Traffic light recognition method and apparatus |
CN112445209A (zh) * | 2019-08-15 | 2021-03-05 | 纳恩博(北京)科技有限公司 | Robot control method, robot, storage medium, and electronic apparatus |
CN111168675B (zh) * | 2020-01-08 | 2021-09-03 | 北京航空航天大学 | Dynamic obstacle avoidance motion planning method for the manipulator of a domestic service robot |
CN111449666B (zh) * | 2020-03-09 | 2023-07-04 | 北京东软医疗设备有限公司 | Distance monitoring method and apparatus, angiography machine, electronic device, and storage medium |
JP2021146905A (ja) * | 2020-03-19 | 2021-09-27 | 本田技研工業株式会社 | Control device, control method, and program |
CN111571582B (zh) * | 2020-04-02 | 2023-02-28 | 上海钧控机器人有限公司 | Human-machine safety monitoring system and method for a moxibustion robot |
CN111427355B (zh) * | 2020-04-13 | 2023-05-02 | 京东科技信息技术有限公司 | Obstacle data processing method, apparatus, device, and storage medium |
JP7447670B2 (ja) * | 2020-05-15 | 2024-03-12 | トヨタ自動車株式会社 | Autonomous mobile device control system, control method, and control program |
US11292132B2 (en) * | 2020-05-26 | 2022-04-05 | Edda Technology, Inc. | Robot path planning method with static and dynamic collision avoidance in an uncertain environment |
CN111857126A (zh) * | 2020-05-29 | 2020-10-30 | 深圳市银星智能科技股份有限公司 | Robot obstacle avoidance method, robot, and storage medium |
CN111958590B (zh) * | 2020-07-20 | 2021-09-28 | 佛山科学技术学院 | Manipulator collision avoidance method and system in complex 3D environments |
CN112415532B (zh) * | 2020-11-30 | 2022-10-21 | 上海炬佑智能科技有限公司 | Dust detection method, distance detection apparatus, and electronic device |
CN112991527B (zh) * | 2021-02-08 | 2022-04-19 | 追觅创新科技(苏州)有限公司 | Target object avoidance method and apparatus, storage medium, and electronic apparatus |
CN113119109A (zh) * | 2021-03-16 | 2021-07-16 | 上海交通大学 | Industrial robot path planning method and system based on pseudo-distance functions |
CN113459090A (zh) * | 2021-06-15 | 2021-10-01 | 中国农业大学 | Intelligent obstacle avoidance method for palletizing robots, electronic device, and medium |
US11753045B2 (en) * | 2021-06-22 | 2023-09-12 | Waymo Llc | Modeling positional uncertainty of moving objects using precomputed polygons |
CN113601497B (zh) * | 2021-07-09 | 2024-02-06 | 广东博智林机器人有限公司 | Method, apparatus, robot, and storage medium |
CN113752265B (zh) * | 2021-10-13 | 2024-01-05 | 国网山西省电力公司超高压变电分公司 | Manipulator obstacle avoidance path planning method, system, and apparatus |
KR102563074B1 (ko) * | 2021-10-20 | 2023-08-02 | 금오공과대학교 산학협력단 | Obstacle avoidance and path following method considering the dynamics of a differential-drive mobile robot |
CN114035569B (zh) * | 2021-11-09 | 2023-06-27 | 中国民航大学 | Path expansion and passage method for passenger-carrying robots in airport terminals |
CN114161047B (zh) * | 2021-12-23 | 2022-11-18 | 南京衍构科技有限公司 | Automatic obstacle avoidance method for a welding torch head in additive manufacturing |
CN114355914B (zh) * | 2021-12-27 | 2022-07-01 | 盐城工学院 | Autonomous cruise system and control method for unmanned ships |
CN114227694B (zh) * | 2022-01-10 | 2024-05-03 | 珠海一微半导体股份有限公司 | Robot control method based on floor-socket detection, chip, and robot |
CN114859904B (zh) * | 2022-04-24 | 2023-04-07 | 汕头大学 | E-GRN-based swarm hunting method, execution apparatus, and system |
CN115202350B (zh) * | 2022-07-15 | 2023-06-09 | 盐城工学院 | Automatic transport system for AGV carts |
CN115437388B (zh) * | 2022-11-09 | 2023-01-24 | 成都朴为科技有限公司 | Method and apparatus for extricating an omnidirectional mobile robot |
CN115507857B (zh) * | 2022-11-23 | 2023-03-14 | 常州唯实智能物联创新中心有限公司 | Efficient robot motion path planning method and system |
CN116755562B (zh) * | 2023-07-04 | 2024-04-05 | 深圳市仙瞬科技有限公司 | Obstacle avoidance method, apparatus, medium, and AR/VR device |
CN117093005B (zh) * | 2023-10-16 | 2024-01-30 | 华东交通大学 | Autonomous obstacle avoidance method for intelligent vehicles |
CN117207202B (zh) * | 2023-11-09 | 2024-04-02 | 国网山东省电力公司东营供电公司 | Anti-collision constraint control method, system, terminal, and medium for live-line working robots |
CN117406758B (zh) * | 2023-12-14 | 2024-03-12 | 双擎科技(杭州)有限公司 | Robot obstacle avoidance apparatus and intelligent robot anti-collision system |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101512453A (zh) * | 2006-09-14 | 2009-08-19 | Abb研究有限公司 | Method and device for avoiding collisions between an industrial robot and an object |
US7734387B1 (en) * | 2006-03-31 | 2010-06-08 | Rockwell Collins, Inc. | Motion planner for unmanned ground vehicles traversing at high speeds in partially known environments |
US20160111006A1 (en) * | 2014-05-20 | 2016-04-21 | Verizon Patent And Licensing Inc. | User interfaces for selecting unmanned aerial vehicles and mission plans for unmanned aerial vehicles |
CN106227218A (zh) * | 2016-09-27 | 2016-12-14 | 深圳乐行天下科技有限公司 | Navigation and obstacle avoidance method and apparatus for an intelligent mobile device |
CN106406312A (zh) * | 2016-10-14 | 2017-02-15 | 平安科技(深圳)有限公司 | Guide robot and movement area calibration method therefor |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7164117B2 (en) * | 1992-05-05 | 2007-01-16 | Automotive Technologies International, Inc. | Vehicular restraint system control system and method using multiple optical imagers |
JP3425760B2 (ja) * | 1999-01-07 | 2003-07-14 | 富士通株式会社 | 干渉チェック装置 |
US8840838B2 (en) * | 2011-09-25 | 2014-09-23 | Theranos, Inc. | Centrifuge configurations |
JP2014018912A (ja) | 2012-07-18 | 2014-02-03 | Seiko Epson Corp | Robot control device, robot control method, robot control program, and robot system |
JP2014056506A (ja) * | 2012-09-13 | 2014-03-27 | Toyota Central R&D Labs Inc | Obstacle detection device and moving body equipped therewith |
US9227323B1 (en) | 2013-03-15 | 2016-01-05 | Google Inc. | Methods and systems for recognizing machine-readable information on three-dimensional objects |
US20150202770A1 (en) * | 2014-01-17 | 2015-07-23 | Anthony Patron | Sidewalk messaging of an autonomous robot |
TWI555524B (zh) | 2014-04-30 | 2016-11-01 | 國立交通大學 | Mobility assistance system for robots |
KR101664575B1 (ko) * | 2014-11-07 | 2016-10-10 | 재단법인대구경북과학기술원 | Obstacle avoidance system and method for mobile robots |
US10586464B2 (en) * | 2015-07-29 | 2020-03-10 | Warren F. LeBlanc | Unmanned aerial vehicles |
2017
- 2017-03-27 CN CN201710186581.6A patent/CN107688342B/zh active Active
- 2017-06-30 KR KR1020187018065A patent/KR102170928B1/ko active IP Right Grant
- 2017-06-30 SG SG11201809892QA patent/SG11201809892QA/en unknown
- 2017-06-30 EP EP17897209.7A patent/EP3410246B1/en active Active
- 2017-06-30 WO PCT/CN2017/091368 patent/WO2018176668A1/zh active Application Filing
- 2017-06-30 US US16/084,231 patent/US11059174B2/en active Active
- 2017-06-30 JP JP2018533757A patent/JP6716178B2/ja active Active
- 2017-06-30 AU AU2017404562A patent/AU2017404562B2/en active Active
- 2017-10-13 TW TW106135246A patent/TWI662388B/zh active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7734387B1 (en) * | 2006-03-31 | 2010-06-08 | Rockwell Collins, Inc. | Motion planner for unmanned ground vehicles traversing at high speeds in partially known environments |
CN101512453A (zh) * | 2006-09-14 | 2009-08-19 | Abb研究有限公司 | Method and device for avoiding collisions between an industrial robot and an object |
US20160111006A1 (en) * | 2014-05-20 | 2016-04-21 | Verizon Patent And Licensing Inc. | User interfaces for selecting unmanned aerial vehicles and mission plans for unmanned aerial vehicles |
CN106227218A (zh) * | 2016-09-27 | 2016-12-14 | 深圳乐行天下科技有限公司 | Navigation and obstacle avoidance method and apparatus for an intelligent mobile device |
CN106406312A (zh) * | 2016-10-14 | 2017-02-15 | 平安科技(深圳)有限公司 | Guide robot and movement area calibration method therefor |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111046743A (zh) * | 2019-11-21 | 2020-04-21 | 新奇点企业管理集团有限公司 | Obstacle information labeling method and apparatus, electronic device, and storage medium |
CN111046743B (zh) * | 2019-11-21 | 2023-05-05 | 新奇点智能科技集团有限公司 | Obstacle information labeling method and apparatus, electronic device, and storage medium |
CN111207750A (zh) * | 2019-12-31 | 2020-05-29 | 合肥赛为智能有限公司 | Dynamic path planning method for a robot |
CN112148013A (zh) * | 2020-09-25 | 2020-12-29 | 深圳优地科技有限公司 | Robot obstacle avoidance method, robot, and storage medium |
CN112704878A (zh) * | 2020-12-31 | 2021-04-27 | 深圳市其乐游戏科技有限公司 | Unit position adjustment method, system, device, and storage medium in a swarm game |
CN113110594A (zh) * | 2021-05-08 | 2021-07-13 | 北京三快在线科技有限公司 | Method, apparatus, and storage medium for controlling UAV obstacle avoidance, and UAV |
CN113282984A (zh) * | 2021-05-21 | 2021-08-20 | 长安大学 | Emergency evacuation simulation method for people in public places |
CN114859914A (zh) * | 2022-05-09 | 2022-08-05 | 广东利元亨智能装备股份有限公司 | Obstacle detection method, apparatus, device, and storage medium |
CN115890676A (zh) * | 2022-11-28 | 2023-04-04 | 深圳优地科技有限公司 | Robot control method, robot, and storage medium |
Also Published As
Publication number | Publication date |
---|---|
SG11201809892QA (en) | 2018-12-28 |
JP6716178B2 (ja) | 2020-07-01 |
CN107688342B (zh) | 2019-05-10 |
EP3410246B1 (en) | 2021-06-23 |
AU2017404562B2 (en) | 2020-01-30 |
US20210078173A1 (en) | 2021-03-18 |
US11059174B2 (en) | 2021-07-13 |
KR20190022435A (ko) | 2019-03-06 |
EP3410246A4 (en) | 2019-11-06 |
TWI662388B (zh) | 2019-06-11 |
CN107688342A (zh) | 2018-02-13 |
TW201835703A (zh) | 2018-10-01 |
JP2019516146A (ja) | 2019-06-13 |
AU2017404562A1 (en) | 2018-10-11 |
KR102170928B1 (ko) | 2020-10-29 |
EP3410246A1 (en) | 2018-12-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2018176668A1 (zh) | 机器人的避障控制系统、方法、机器人及存储介质 (Robot obstacle avoidance control system and method, robot, and storage medium) | |
AU2018271237B2 (en) | Method for building a map of probability of one of absence and presence of obstacles for an autonomous robot | |
JP6105092B2 (ja) | Method and apparatus for providing augmented reality using optical character recognition | |
US20210049360A1 (en) | CONTROLLER GESTURES IN VIRTUAL, AUGMENTED, AND MIXED REALITY (xR) APPLICATIONS | |
Wang et al. | Robot manipulator self-identification for surrounding obstacle detection | |
CN110319834B (zh) | Indoor robot positioning method and robot | |
CN110874100A (zh) | System and method for autonomous navigation using sparse visual maps | |
US11703334B2 (en) | Mobile robots to generate reference maps for localization | |
US10102629B1 (en) | Defining and/or applying a planar model for object detection and/or pose estimation | |
Krajník et al. | External localization system for mobile robotics | |
KR20220076524A (ko) | 증강 현실 효과들의 동적 점진적 열화 | |
US9245366B1 (en) | Label placement for complex geographic polygons | |
US11704881B2 (en) | Computer systems and methods for navigating building information models in an augmented environment | |
JP2021531524A (ja) | User pose estimation method and apparatus using a 3D virtual space model | |
JP7197550B2 (ja) | Path tracking method and system based on visual localization and odometry | |
WO2019183928A1 (zh) | Indoor robot positioning method and robot | |
US20220237875A1 (en) | Methods and apparatus for adaptive augmented reality anchor generation | |
US11620846B2 (en) | Data processing method for multi-sensor fusion, positioning apparatus and virtual reality device | |
EP3295291B1 (en) | Drawing object inferring system and method | |
US20220129006A1 (en) | Spatial blind spot monitoring systems and related methods of use | |
Meng et al. | Robust 3D Indoor Map Building via RGB-D SLAM with Adaptive IMU Fusion on Robot | |
Jabalameli et al. | Near Real-Time Robotic Grasping of Novel Objects in Cluttered Scenes | |
CN116848493A (zh) | Bundle adjustment using epipolar constraints | |
WO2023009138A1 (en) | Near-field communication overlay | |
Faigl et al. | External localization system for mobile robotics |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
ENP | Entry into the national phase |
Ref document number: 20187018065 Country of ref document: KR Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 2018533757 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2017897209 Country of ref document: EP |
|
ENP | Entry into the national phase |
Ref document number: 2017404562 Country of ref document: AU Date of ref document: 20170630 Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 2017897209 Country of ref document: EP Effective date: 20180828 |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 17897209 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |