CN112060079B - Robot and collision detection method and device thereof - Google Patents
- Publication number: CN112060079B (application CN202010752413.0A)
- Authority: CN (China)
- Prior art keywords: robot, point cloud, height, cost, projection area
- Legal status: Active (assumed by the register; not a legal conclusion)
Classifications
- B25J9/1666: Programme-controlled manipulators; programme controls; planning systems; motion, path, trajectory planning; avoiding collision or forbidden zones
- B25J9/1676: Programme-controlled manipulators; programme controls; safety, monitoring, diagnostic; avoiding collision or forbidden zones
- G06T7/70: Image analysis; determining position or orientation of objects or cameras
- G06V20/10: Scenes; scene-specific elements; terrestrial scenes
- G06T2207/10028: Indexing scheme for image analysis; image acquisition modality; range image; depth image; 3D point clouds
Abstract
The application belongs to the field of robots and discloses a robot and a collision detection method and device thereof. The method comprises the following steps: dividing the robot into a plurality of height intervals, and acquiring the projection area obtained by projecting the robot part corresponding to each height interval; acquiring a point cloud set of the scene where the robot is located, and dividing the point cloud set according to the divided height intervals to obtain point cloud subsets; projecting the point cloud subsets along the height direction of the robot to obtain the corresponding projection areas; and performing collision detection according to the projection area corresponding to the robot part and the projection area corresponding to the point cloud subset in each height interval. Compared with two-dimensional obstacle avoidance, this reduces the risk in the robot's movement; compared with full three-dimensional collision detection, computing over a plurality of height intervals reduces the amount of three-dimensional detection computation and improves the real-time performance of collision detection.
Description
Technical Field
The application belongs to the field of robots, and particularly relates to a robot and a collision detection method and device thereof.
Background
During the movement of a robot, real-time and rapid collision detection is a key technology for autonomous obstacle avoidance. With real-time, rapid collision detection, obstacle information in the scene can be acquired in a more timely manner, meeting the requirements of fast robot movement.
At present, a robot usually obtains obstacle information of the two-dimensional plane where it is located by laser radar scanning. When collision detection is performed on such two-dimensional obstacle information, a taller robot still risks colliding with obstacles above the scanning plane. Acquiring obstacle information of the three-dimensional space of the scene and using it for three-dimensional collision detection can effectively reduce this risk, but existing three-dimensional methods involve a large amount of computation and can hardly meet the real-time requirements of collision detection during planning.
Disclosure of Invention
In view of this, the embodiments of the application provide a robot and a collision detection method and device thereof, so as to solve the problem in the prior art that the computation of obstacle information during collision detection is too large to meet the real-time requirements of collision detection during planning.
A first aspect of an embodiment of the present application provides a collision detection method for a robot, where the method includes:
dividing the robot into a plurality of height intervals, and acquiring a projection area obtained by projecting a robot part corresponding to the height intervals;
acquiring a point cloud set of a scene where the robot is located, and obtaining a point cloud subset generated by point cloud set division according to the divided height intervals;
projecting the point cloud subset along the height direction of the robot to obtain a corresponding projection area;
and performing collision detection according to the projection area corresponding to the robot part in each height interval and the projection area corresponding to the point cloud subset.
With reference to the first aspect, in a first possible implementation manner of the first aspect, performing collision detection according to a projection area corresponding to the robot part in each height section and a projection area corresponding to a point cloud subset includes:
generating cost maps corresponding to different height intervals according to the projection area corresponding to the robot part and the projection area corresponding to the point cloud subset, wherein the cost maps are used for representing the possibility that different positions in the maps collide in the height intervals corresponding to the cost maps;
and performing collision detection according to the cost maps of the different height sections.
With reference to the first possible implementation manner of the first aspect, in a second possible implementation manner of the first aspect, performing collision detection according to cost maps of different height sections includes:
acquiring a cost value of a preset position in a corresponding cost map;
selecting the highest of the cost values of the preset position across the cost maps of the different height intervals as the cost value of the preset position, wherein the higher the cost value of a position in a cost map, the higher the probability of a collision at that position;
and determining the collision possibility of the preset position according to the cost value corresponding to the preset position.
With reference to the first aspect, in a third possible implementation manner of the first aspect, dividing the robot into a plurality of height intervals includes:
and dividing the robot into a plurality of height sections according to the change information of the shape of the robot in the height direction.
With reference to the first aspect, in a fourth possible implementation manner of the first aspect, the obtaining a point cloud set of a scene where the robot is located includes:
acquiring a depth image of a scene where the robot is located;
and converting the depth image of the scene where the robot is located into a point cloud set of the scene where the robot is located according to a preset conversion rule.
With reference to the first aspect, in a fifth possible implementation manner of the first aspect, the acquiring a point cloud set of a scene where the robot is located includes:
acquiring a color image and a depth image of a scene where the robot is located;
and determining a point cloud set of the scene where the robot is located according to the depth image and the color image by combining the robot camera parameters of the acquired images.
In a second aspect, an embodiment of the present application provides a path planning method, where the path planning method includes:
acquiring a current position and a moving target position of the robot in a grid map, and acquiring a cost value of a grid in a cost map according to a first possible implementation manner of the first aspect or a second possible implementation manner of the first aspect;
according to the cost value of the grid, obtaining path cost values corresponding to a plurality of paths moving from the current position of the robot to the target position;
and determining the moving path of the robot according to the distance of the path and the path cost value.
With reference to the second aspect, in a first possible implementation manner of the second aspect, obtaining path cost values corresponding to a plurality of paths moving from the current position of the robot to the target position according to the cost values of the grids includes:
determining a grid set corresponding to a plurality of paths of the robot moving from the current position to the target position respectively;
obtaining a cost value of each grid in the grid set;
and determining the path cost value of the path corresponding to the grid set according to the sum of the cost values of the grids in the grid set.
A third aspect of embodiments of the present application provides a collision detection apparatus of a robot, the apparatus including:
the height interval dividing unit is used for dividing the robot into a plurality of height intervals and acquiring a projection area obtained by projecting a robot part corresponding to the height intervals;
the point cloud dividing unit is used for acquiring a point cloud set of a scene where the robot is located and obtaining a point cloud subset generated by dividing the point cloud set according to the divided height interval;
the point cloud projection unit is used for projecting the point cloud subset along the height direction of the robot to obtain a corresponding projection area;
and the collision detection unit is used for carrying out collision detection according to the projection area corresponding to the robot part in each height section and the projection area corresponding to the point cloud subset.
A fourth aspect of the embodiments of the present application provides a robot comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the method according to any one of the first aspect when executing the computer program.
A fifth aspect of the embodiments of the present application provides a computer-readable storage medium, in which a computer program is stored which, when executed by a processor, performs the steps of the method according to any one of the first aspect.
Compared with the prior art, the embodiments of the application have the following advantages: the robot is divided into a plurality of height intervals and the projection area of the robot part corresponding to each height interval is obtained; the point cloud set of the scene is divided according to the same height intervals into point cloud subsets, whose projection areas are obtained; and collision detection is performed according to the projection area of the robot part and the projection area of the point cloud subset in each height interval. Compared with two-dimensional obstacle avoidance, the risk in the robot's movement is reduced; compared with full three-dimensional collision detection, computing per height interval reduces the amount of three-dimensional detection computation and improves the real-time performance of collision detection.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the embodiments or the prior art descriptions will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive exercise.
Fig. 1 is a schematic flow chart of an implementation of a collision detection method for a robot according to an embodiment of the present disclosure;
fig. 2 is a schematic diagram illustrating division of a robot height interval according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of determining a collision relationship between a robot and an obstacle according to an embodiment of the present disclosure;
fig. 4 is a schematic flow chart of an implementation of a method for determining a moving path of a robot according to an embodiment of the present application;
fig. 5 is a schematic diagram of a robot collision detection apparatus according to an embodiment of the present disclosure;
fig. 6 is a schematic view of a robot provided in an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
In order to explain the technical solution described in the present application, the following description will be given by way of specific examples.
The robot in the embodiments of the application may be a mobile robot used in various industries, particularly a relatively tall robot. For example, it may be a service robot used in daily life, or an inspection robot performing safety patrols in a designated area. With the robot collision detection method provided by the embodiments of the application, a mobile robot can quickly and accurately detect obstacle information in its scene, improving the timeliness of obstacle collision detection while ensuring safe movement.
Fig. 1 is a schematic flow chart of an implementation of a collision detection method for a robot according to an embodiment of the present application, which is detailed as follows:
in step S101, the robot is divided into a plurality of height sections, and a projection area projected by a robot part corresponding to the height sections is acquired.
When the robot is divided into a plurality of height sections, the division may be made according to the shape information of the robot itself. For example, robots with different functions may carry functional parts at different positions on the body: some robots have a large torso, and a serving robot may have a tray mounted in front of its chest to hold items that the user may need.
In one way of dividing the height sections, the height sections of the robot may be determined based on information on the change of the robot's shape in the height direction. The shape information may be the size of the robot's outer contour; in a possible implementation, it may be represented by the radius of the circumscribed circle of the robot's cross-section at the corresponding height.
The height sections may then be divided according to the magnitude of the change in this shape information along the height direction. For example, a threshold may be set for the shape variation allowed within one height section; scanning from the bottom (or from the top), whenever the variation within the current section is detected to exceed the threshold, a new height section is started at that height.
After the division is completed, a plurality of height sections is obtained, and the change of the robot's shape within each section satisfies the preset variation requirement. By dividing the robot into height sections in this way, different collision probabilities can be calculated, for an obstacle at the same distance, for the robot parts corresponding to different height sections.
For example, in the robot height section division diagram shown in fig. 2, the robot is divided into three height sections according to the outward spread of the robot in the advancing direction. The divided height intervals include a height interval a, a height interval B, and a height interval C. The robot appearance corresponding to the height section A and the height section C is small, and the robot appearance corresponding to the height section B is large.
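As an illustrative sketch of this interval division (the per-height profile representation, the variation threshold, and the bottom-up scan below are assumptions for illustration, not fixed by the embodiment):

```python
import numpy as np

def divide_height_intervals(profile, heights, threshold):
    """Split a robot into height intervals wherever the cross-section
    profile (e.g. circumscribed-circle radius per sampled height)
    varies by more than `threshold` within the current interval."""
    boundaries = [heights[0]]
    ref = profile[0]                      # radius at start of current interval
    for h, r in zip(heights[1:], profile[1:]):
        if abs(r - ref) > threshold:      # shape changed too much: new interval
            boundaries.append(h)
            ref = r
    boundaries.append(heights[-1])
    # consecutive boundary pairs are the height intervals
    return list(zip(boundaries[:-1], boundaries[1:]))

# e.g. a robot like Fig. 2: narrow base, wide torso, narrow head
heights = np.linspace(0.0, 1.6, 160)
radii = np.where((heights > 0.5) & (heights < 1.2), 0.35, 0.20)
print(divide_height_intervals(radii, heights, threshold=0.05))
# -> roughly [(0.0, ~0.5), (~0.5, ~1.2), (~1.2, 1.6)], i.e. intervals A, B, C
```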
In order to facilitate collision detection and reduce its computation load, the robot parts corresponding to different height sections are projected. After the robot is divided into a plurality of height sections, the robot part corresponding to each height section can be projected onto a horizontal plane (equivalently, projected along the height direction of the robot), so as to obtain the projection areas corresponding to the robot parts in the different height sections.
In step S102, a point cloud set of a scene where the robot is located is obtained, and a point cloud subset generated by dividing the point cloud set is obtained according to the divided height intervals.
After the projection areas corresponding to the robot parts in different height sections are obtained, in order to facilitate collision detection, it is necessary to obtain obstacle information of the different height sections.
In the embodiment of the application, the obstacle information of the scene can be determined by acquiring a point cloud set of the scene where the robot is located. The points in the point cloud set are then divided into different point cloud subsets according to their heights.
For example, for a point X_i in the point cloud set whose height is h_i, if h_i falls within the height interval (h_{j-1}, h_j), the point X_i is assigned to the point cloud subset corresponding to (h_{j-1}, h_j).
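A minimal sketch of this division step, assuming the point cloud is an (N, 3) array whose third column is the height:

```python
import numpy as np

def split_point_cloud(points, intervals):
    """Divide an (N, 3) point cloud into per-interval subsets by the
    height (z) of each point; intervals are (low, high) pairs."""
    z = points[:, 2]                       # height of each point X_i
    return [points[(z >= low) & (z < high)] for low, high in intervals]

points = np.random.rand(1000, 3) * [4.0, 4.0, 1.6]   # synthetic scene
subsets = split_point_cloud(points, [(0.0, 0.5), (0.5, 1.2), (1.2, 1.6)])

# Projecting a subset along the robot's height direction then amounts
# to dropping the height coordinate:
footprint_xy = subsets[1][:, :2]
```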
In the embodiment of the application, the point cloud set of the scene where the robot is located can be determined by acquiring the depth image of the scene where the robot is located.
The depth image of the scene where the robot is located can be acquired through a depth sensor, such as a depth camera or a laser radar. The coordinates of the three-dimensional points corresponding to the pixels in the depth image are then calculated from the acquired depth image.
In one implementation, let the focal lengths of the camera acquiring the depth image be f_x and f_y, and the image center be (c_x, c_y). Then the coordinates (u, v) of any pixel in the image coordinate system, together with its depth value z, can be converted into a three-dimensional point (X, Y, Z) according to the following formulas:

X = (u - c_x) · Z / f_x
Y = (v - c_y) · Z / f_y
Z = z

where z is the depth recorded at pixel (u, v) in the depth image.
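A sketch of this conversion under the formulas above (the intrinsic parameters below are placeholder values, not taken from the patent):

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (H, W) into an (N, 3) point cloud
    using X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy, Z = z."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]                        # drop pixels with no depth

# placeholder intrinsics for a 640x480 depth camera
cloud = depth_to_point_cloud(np.random.rand(480, 640) * 5.0,
                             fx=525.0, fy=525.0, cx=319.5, cy=239.5)
```

Note that the Z produced here is depth along the camera axis; before the height-based division described above, the points would still need to be transformed into the robot's coordinate frame so that one axis corresponds to height.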
In step S103, the point cloud subsets are projected along the height direction of the robot to obtain corresponding projection areas.
In order to facilitate collision detection between the obstacle of each height interval in the scene where the robot is located and the corresponding robot part, the projection area of the point cloud subset corresponding to each height interval can be obtained. The projection area is an area obtained by projecting the point cloud subset to a horizontal plane. Alternatively, the projection area may also be understood as an area obtained by projecting the point cloud subset in the height direction of the robot.
In step S104, collision detection is performed based on the projection area corresponding to the robot part in each height section and the projection area corresponding to the point cloud subset.
After the projection areas corresponding to the point cloud subsets of the different height intervals (i.e., the projection areas corresponding to the obstacles) and the projection areas of the robot parts are determined, collision detection can be performed for each height interval according to information such as the distance and shapes of the two projection areas of the same height interval in the same plane.
In one possible implementation, as shown in fig. 3, a circumscribed circle O1 and an inscribed circle O2 of the shape M of the robot part in the i-th height section may be determined. Whether the robot certainly does not collide with the obstacle P can be judged from the radius of the circumscribed circle and the shortest distance between the robot and the obstacle, and whether the robot inevitably collides with the obstacle P can be judged from the radius of the inscribed circle and that distance.
For example, let the radius of the circumscribed circle O1 be r1 and the radius of the inscribed circle O2 be r2, with r1 > r2. If the shortest distance between the robot and the obstacle P (i.e., between the two projection areas) is greater than r1, the robot does not collide in the i-th height interval. If that distance is greater than r2 but less than r1, the robot may collide with the obstacle in the i-th height interval. If it is less than r2, the robot will inevitably collide in the i-th height interval.
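A sketch of this three-way test (taking `shortest_dist` as the shortest distance between the two projection areas, as in the example above):

```python
def collision_state(shortest_dist, r_outer, r_inner):
    """Classify the collision relationship in one height interval from
    the shortest distance between the two projection areas and the
    circumscribed (r_outer) / inscribed (r_inner) radii of the robot
    part, following the O1/O2 test around shape M."""
    assert r_outer >= r_inner
    if shortest_dist > r_outer:
        return "no collision"        # obstacle beyond circumscribed circle
    if shortest_dist < r_inner:
        return "certain collision"   # obstacle within inscribed circle
    return "possible collision"      # between the two circles

print(collision_state(0.50, r_outer=0.40, r_inner=0.25))  # no collision
print(collision_state(0.30, r_outer=0.40, r_inner=0.25))  # possible collision
print(collision_state(0.10, r_outer=0.40, r_inner=0.25))  # certain collision
```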
In a possible implementation, the probability of a collision may be determined from the shortest distance between the robot part in the i-th height section and the obstacles in that section, together with the circumscribed-circle radius and inscribed-circle radius of the robot shape corresponding to the i-th height section; the cost value corresponding to the i-th height section is then determined from this collision probability.
For example, the cost value may range from 0 to 255, a higher value indicating a higher possibility of collision. In a possible implementation, some cost values may carry a specific meaning: 255 may denote an unexplored position, 254 an obstacle, and 253 a position closer to the obstacle than the robot's inscribed-circle radius, where a collision is certain; values 1-252 may express a graded collision possibility, and 0 denotes no collision.
In the same way, the cost values of all positions within one height interval can be determined, and after associating the planar map of the scene with these cost values, the cost map corresponding to that height interval is obtained. The cost value of each location in the cost map indicates the possibility that the robot collides there, or marks the location as an obstacle or as unexplored.
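One possible mapping from that shortest distance to the 0-255 cost scale described above; the linear ramp between the two radii is an assumption, as the example only fixes the special values 0, 253, 254 and 255:

```python
def cost_value(shortest_dist, r_outer, r_inner,
               unknown=False, is_obstacle=False):
    """Map a robot-to-obstacle distance in one height interval onto the
    0-255 cost scale: 255 unexplored, 254 obstacle, 253 certain
    collision, 1-252 graded possibility, 0 no collision."""
    if unknown:
        return 255
    if is_obstacle:
        return 254
    if shortest_dist < r_inner:
        return 253                       # closer than inscribed radius
    if shortest_dist > r_outer:
        return 0                         # beyond circumscribed radius
    # assumed linear ramp: 252 at r_inner falling to 1 at r_outer
    frac = (r_outer - shortest_dist) / (r_outer - r_inner)
    return max(1, min(252, round(1 + frac * 251)))
```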
When collision detection is performed for the whole robot, the cost maps of all height intervals can be integrated into a total cost map corresponding to the whole robot. The cost value of any preset position in the total cost map is the maximum of the cost values at that position across the cost maps of the different height intervals. In this way, the possibility that the robot may collide can be obtained more accurately: compared with two-dimensional plane detection, the collision possibilities of all height intervals are effectively integrated, giving a more accurate collision detection result.
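Combining the per-interval cost maps into the total cost map is then a cell-wise maximum, e.g.:

```python
import numpy as np

# one (H, W) uint8 cost map per height interval (demo data)
cost_maps = [np.random.randint(0, 256, (200, 200), dtype=np.uint8)
             for _ in range(3)]

# total cost at each grid cell = max over all height intervals
total_cost_map = np.maximum.reduce(cost_maps)
```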
For example, compared with two-dimensional plane collision detection: suppose the robot has a large contour S1 in a certain height section, and at another height there is an obstacle S2 whose projection coincides with the projection area of S1. Plane detection would conclude that the robot and the obstacle must collide, even though they never occupy the same height. By comparing the obstacles of each height interval only with the robot part of the same interval, such spurious collisions between obstacles and robot parts at different heights are avoided, which helps improve the collision detection precision.
According to the robot collision detection method shown in fig. 1, the possibility of a collision at any point, or in any grid, of the cost map can be determined. In a further implementation, a robot path can be planned using the total cost map, reducing the robot's collision risk while keeping the travel time low.
As shown in fig. 4, in the implementation flowchart of determining the moving path of the robot through collision detection according to the embodiment of the present application, the implementation flowchart includes:
in step S401, the current position and the moving target position of the robot in the grid map, and the cost value of the grid in the cost map are acquired.
The current position of the robot can be determined by visual positioning or by reference to positioning base stations. For example, the distance between the robot and a positioning base station may be determined from the signal strength of the base-station signal received by the robot, and the robot's position in the scene can then be determined from three or more such distances.
The target position is a node position corresponding to the task received by the robot, or may be a received position set by the user.
From the cost values of the grids in the total cost map, the highest collision possibility over all height intervals when the robot occupies a given grid can be read off.
The cost value of the mesh in the cost map can be determined according to the collision detection method of the robot shown in fig. 1.
In step S402, path cost values corresponding to a plurality of paths moving from the current position of the robot to the target position are acquired based on the cost values of the mesh.
A plurality of candidate paths from the current position to the target position may first be determined, favouring short distances. For each determined path, the positions, or grids, to be passed through during the movement are identified, and the cost values corresponding to these positions or grids are computed.
For the positions along a path, sample positions may be taken at predetermined distance intervals.
The cost values of the positions or grids included in a path can be summed to obtain the path cost value of that path, i.e. a measure of the likelihood that the path leads to a collision. Alternatively, the maximum cost value in the total cost map over the positions or grids of the path may be chosen as the path cost value.
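A sketch of both scoring variants (grid layout and indexing are assumptions):

```python
import numpy as np

def path_cost(total_cost_map, path_cells, use_max=False):
    """Score a candidate path by the cost of the grid cells it crosses:
    the sum of cell costs by default, or the single worst cell."""
    costs = np.array([int(total_cost_map[r, c]) for r, c in path_cells])
    return int(costs.max()) if use_max else int(costs.sum())

grid = np.random.randint(0, 254, (100, 100))        # demo total cost map
path = [(50, c) for c in range(10, 20)]             # cells a path crosses
print(path_cost(grid, path), path_cost(grid, path, use_max=True))
```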
In step S403, the movement path of the robot is determined based on the distance of the path and the path cost value.
After the distance and the path cost value of each path are determined, the moving path of the robot can be selected using a search strategy that gives priority either to the shortest distance or to the lowest collision risk.
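A sketch of the two search preferences over precomputed candidates (the tuple layout is an assumption):

```python
def choose_path(candidates):
    """candidates: list of (distance, path_cost, path) tuples.
    Distance-priority picks the shortest path, breaking ties by cost;
    risk-priority picks the lowest-cost path, breaking ties by distance."""
    by_distance = min(candidates, key=lambda c: (c[0], c[1]))
    by_risk = min(candidates, key=lambda c: (c[1], c[0]))
    return by_distance, by_risk
```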
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Fig. 5 is a schematic structural diagram of a collision detection apparatus of a robot according to an embodiment of the present application, where the collision detection apparatus of the robot includes:
a height interval dividing unit 501, configured to divide the robot into a plurality of height intervals, and obtain a projection area obtained by projecting a robot part corresponding to each height interval;
a point cloud dividing unit 502, configured to obtain a point cloud set of a scene where the robot is located, and obtain a point cloud subset generated by dividing the point cloud set according to the divided height interval;
a point cloud projection unit 503, configured to project the point cloud subset along the height direction of the robot to obtain a corresponding projection area;
and a collision detection unit 504 for performing collision detection according to the projection area corresponding to the robot part in each height section and the projection area corresponding to the point cloud subset.
The robot collision detection device shown in fig. 5 corresponds to the robot collision detection method shown in fig. 1.
Fig. 6 is a schematic diagram of a robot provided in an embodiment of the present application. As shown in fig. 6, the robot 6 of this embodiment includes: a processor 60, a memory 61 and a computer program 62, such as a collision detection program for a robot, stored in said memory 61 and executable on said processor 60. The processor 60, when executing the computer program 62, implements the steps in the various robot collision detection method embodiments described above. Alternatively, the processor 60 implements the functions of the modules/units in the above-described device embodiments when executing the computer program 62.
Illustratively, the computer program 62 may be partitioned into one or more modules/units that are stored in the memory 61 and executed by the processor 60 to accomplish the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution of the computer program 62 in the robot 6.
The robot may include, but is not limited to, a processor 60, a memory 61. Those skilled in the art will appreciate that fig. 6 is merely an example of a robot 6, and does not constitute a limitation of the robot 6, and may include more or fewer parts than shown, or some parts in combination, or different parts, for example, the robot may also include input and output devices, network access devices, buses, etc.
The Processor 60 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, etc. A general-purpose processor may be a microprocessor, or any conventional processor.
The memory 61 may be an internal storage unit of the robot 6, such as a hard disk or a memory of the robot 6. The memory 61 may also be an external storage device of the robot 6, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), or the like, provided on the robot 6. Further, the memory 61 may also include both an internal storage unit and an external storage device of the robot 6. The memory 61 is used for storing the computer program and other programs and data required by the robot. The memory 61 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer-readable storage medium. Based on this understanding, all or part of the flow of the methods in the above embodiments can be realized by a computer program, which can be stored in a computer-readable storage medium and, when executed by a processor, realizes the steps of the above method embodiments. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content of the computer-readable medium may be appropriately adjusted according to the requirements of legislation and patent practice in each jurisdiction; for example, in some jurisdictions, computer-readable media do not include electrical carrier signals and telecommunications signals.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.
Claims (9)
1. A method of collision detection for a robot, the method comprising:
dividing the robot into a plurality of height intervals, and acquiring a projection area obtained by projecting a robot part corresponding to the height intervals;
acquiring a point cloud set of a scene where the robot is located, and obtaining a point cloud subset generated by point cloud set division according to the divided height intervals;
projecting the point cloud subset along the height direction of the robot to obtain a corresponding projection area;
performing collision detection according to the projection area corresponding to the robot part of each height interval and the projection area corresponding to the point cloud subset;
according to the projection area corresponding to the robot part of each height interval and the projection area corresponding to the point cloud subset, collision detection is carried out, and the collision detection method comprises the following steps:
generating cost maps corresponding to different height intervals according to the projection area corresponding to the robot part and the projection area corresponding to the point cloud subset, wherein the cost maps are used for representing the possibility that different positions in the maps collide in the height intervals corresponding to the cost maps;
and performing collision detection according to the cost maps of the different height sections.
2. The method of claim 1, wherein performing collision detection according to the cost maps of different height intervals comprises:
acquiring a cost value of a preset position in a corresponding cost map;
selecting the highest of the cost values of the preset position across the corresponding cost maps as the cost value of the preset position, wherein the higher the cost value of a position in a cost map, the higher the probability of a collision at that position;
and determining the collision possibility of the preset position according to the cost value corresponding to the preset position.
3. The method of claim 1, wherein dividing the robot into a plurality of height intervals comprises:
and dividing the robot into a plurality of height sections according to the change information of the shape of the robot in the height direction.
4. The method of claim 1, wherein obtaining a cloud set of points of a scene in which the robot is located comprises:
acquiring a depth image of a scene where the robot is located;
and converting the depth image of the scene where the robot is located into a point cloud set of the scene where the robot is located according to a preset conversion rule.
5. The method of claim 1, wherein obtaining a point cloud set of a scene in which the robot is located comprises:
acquiring a color image and a depth image of a scene where the robot is located;
and determining a point cloud set of the scene where the robot is located according to the depth image and the color image by combining the robot camera parameters of the acquired images.
6. A path planning method is characterized by comprising the following steps:
acquiring the current position and the moving target position of the robot in a grid map and the cost value of a grid in the cost map acquired according to claim 2;
according to the cost value of the grid, obtaining path cost values corresponding to a plurality of paths moving from the current position of the robot to the target position;
and determining the moving path of the robot according to the distance of the path and the path cost value.
7. The method of claim 6, wherein obtaining path cost values corresponding to a plurality of paths moving from the current position to the target position of the robot according to the cost values of the grid comprises:
determining a grid set corresponding to a plurality of paths of the robot moving from the current position to the target position respectively;
obtaining a cost value of each grid in the grid set;
and determining the path cost value of the path corresponding to the grid set according to the sum of the cost values of the grids in the grid set.
8. A collision detecting apparatus of a robot, characterized in that the apparatus comprises:
the height interval dividing unit is used for dividing the robot into a plurality of height intervals and acquiring a projection area obtained by projecting a robot part corresponding to the height intervals;
the point cloud dividing unit is used for acquiring a point cloud set of a scene where the robot is located and obtaining a point cloud subset generated by dividing the point cloud set according to the divided height interval;
the point cloud projection unit is used for projecting the point cloud subset along the height direction of the robot to obtain a corresponding projection area;
the collision detection unit is used for performing collision detection according to the projection area corresponding to the robot part of each height section and the projection area corresponding to the point cloud subset;
the collision detection unit includes:
the cost map generation subunit is used for generating cost maps corresponding to different height intervals according to the projection area corresponding to the robot part and the projection area corresponding to the point cloud subset, wherein the cost maps are used for representing the possibility that different positions in the map collide in the height intervals corresponding to the cost maps;
and the detection subunit is used for carrying out collision detection according to the cost maps of the different height intervals.
9. A robot comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the steps of the method according to any of the claims 1 to 5 are implemented when the computer program is executed by the processor.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202010752413.0A | 2020-07-30 | 2020-07-30 | Robot and collision detection method and device thereof |
Publications (2)

| Publication Number | Publication Date |
|---|---|
| CN112060079A | 2020-12-11 |
| CN112060079B | 2022-02-22 |
Family

ID: 73657476

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202010752413.0A | Robot and collision detection method and device thereof | 2020-07-30 | 2020-07-30 |

Country Status (1)

| Country | Link |
|---|---|
| CN | CN112060079B (en) |
Families Citing this family (5)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN112597946A (en) * | 2020-12-29 | 2021-04-02 | 广州极飞科技有限公司 | Obstacle representation method and device, electronic equipment and readable storage medium |
| TWI741943B * | 2021-02-03 | 2021-10-01 | 國立陽明交通大學 | Robot controlling method, motion computing device and robot system |
| CN113588195B (en) * | 2021-08-10 | 2022-07-26 | 同济大学 | Collision blockage detection method and device |
| CN114419075B (en) * | 2022-03-28 | 2022-06-24 | 天津云圣智能科技有限责任公司 | Point cloud cutting method and device and terminal equipment |
| CN116424315B (en) * | 2023-03-31 | 2024-10-15 | 阿波罗智联(北京)科技有限公司 | Collision detection method, collision detection device, electronic equipment, automatic driving vehicle and medium |
Citations (4)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2009052937A * | 2007-08-24 | 2009-03-12 | Hiroshima Univ | Obstacle detecting method and apparatus |
| CN106997721A * | 2017-04-17 | 2017-08-01 | 深圳奥比中光科技有限公司 | Method, device and storage device for drawing 2D maps |
| CN107229903A * | 2017-04-17 | 2017-10-03 | 深圳奥比中光科技有限公司 | Method, device and storage device for robot obstacle avoidance |
| CN111290393A * | 2020-03-04 | 2020-06-16 | 上海高仙自动化科技发展有限公司 | Driving control method and device, intelligent robot and computer readable storage medium |

Family Cites Families (1)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP4311391B2 * | 2005-10-03 | 2009-08-12 | ソニー株式会社 | Contact shape calculation device, contact shape calculation method, and computer program |

Application events: 2020-07-30, application CN202010752413.0A filed; patent CN112060079B granted, status active.
Legal Events

- PB01: Publication
- SE01: Entry into force of request for substantive examination
- GR01: Patent grant