CN112508933A - Flexible mechanical arm movement obstacle avoidance method based on complex space obstacle positioning - Google Patents
- Publication number: CN112508933A
- Application number: CN202011519003.8A
- Authority
- CN
- China
- Prior art keywords
- obstacle
- mechanical arm
- space
- flexible mechanical
- target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING (all classes below)
- G06T7/00—Image analysis; G06T7/0002—Inspection of images, e.g. flaw detection; G06T7/0004—Industrial image inspection
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions; G06F17/10—Complex mathematical operations; G06F17/16—Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
- G06N20/00—Machine learning
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects; G06T17/20—Finite element generation, e.g. wire-frame surface description, tesselation
- G06T7/10—Segmentation; Edge detection; G06T7/13—Edge detection
- G06T2200/00—Indexing scheme for image data processing or generation, in general; G06T2200/04—involving 3D image data
- G06T2207/00—Indexing scheme for image analysis or image enhancement; G06T2207/10—Image acquisition modality; G06T2207/10028—Range image; Depth image; 3D point clouds
- G06T2207/20—Special algorithmic details; G06T2207/20084—Artificial neural networks [ANN]
- G06T2207/20212—Image combination; G06T2207/20221—Image fusion; Image merging
Abstract
The invention discloses a flexible mechanical arm movement obstacle avoidance method based on complex space obstacle positioning, relating to the technical field of flexible mechanical arm obstacle avoidance, comprising the following steps: acquiring target space image feature information in advance and using it as the input of an RCF (Richer Convolutional Features) depth network; performing edge detection with the RCF depth network on all objects in the space image to obtain the position information of obstacles in the space; performing depth measurement on the detected obstacle region with a single-line laser; matching and fusing the planar position information of the obstacle with the obtained depth map to construct a target three-dimensional point cloud model; and marking the obstacle and using it for mechanical arm movement obstacle avoidance. The invention enables long-duration, high-precision work in unstructured complex spaces and provides path planning guidance. It is suited to satellite assembly and other target spaces characterized by complexity and lack of structure, meets automatic assembly tasks with high precision requirements, and offers high guidance efficiency, accurate path planning, and a wide application range.
Description
Technical Field
The invention relates to the technical field of obstacle avoidance of flexible mechanical arms, in particular to a flexible mechanical arm movement obstacle avoidance method based on complex space obstacle positioning.
Background
The installation of single-machine equipment in a small satellite is an important part of its final assembly process. Because the internal structure of a small satellite is complex, the wiring space is narrow, the number of parts is large, and the main equipment generally weighs more than ten kilograms, assembly today is mainly done by hand. Manual assembly requires an assembler to lift a heavy object while maintaining the relative positioning accuracy between the equipment and the satellite body. This makes the operation very difficult, reduces the efficiency of final assembly, and in serious cases can damage the satellite or injure the operator. Moreover, the internal structure and wiring of each satellite model differ, so the consistency of the operation objects is hard to guarantee. To improve the final assembly efficiency of small satellites, it is necessary to develop an intelligent small-satellite assembly system that, using an industrial robot as a platform, combines offline robot programming based on a digital model of the satellite with real-time machine-vision feedback, achieving automatic assembly of single-machine equipment across different satellite models while ensuring that no collision occurs between components and the satellite.
Satellite parts are assembled in a narrow space, where the view is limited and the target position may be occluded, and the assembly space is complex and unstructured because tasks differ. The primary task of an intelligent small-satellite assembly system is therefore to accurately measure the uncertain environment of the assembly process, chiefly by identifying and detecting obstacles. Since fixed equipment and the satellite body are present in the spatial database before and after installation, and the robot hand and the grasped equipment can complete obstacle avoidance calculations through modeling, the obstacles most likely to obstruct motion are the cables fixed in the space.
At present, mechanical arm obstacle avoidance methods mainly detect and localize obstacles from visual information and complete path planning on that basis. However, because of factors such as the camera's viewing angle and occlusion by the mechanical arm itself, obstacle positioning accuracy is low and errors are large, which greatly increases the risk of collision, especially when the structure of the target space is uncertain and complex.
An effective solution to the problems in the related art has not been proposed yet.
Disclosure of Invention
Aiming at the problems in the related art, the invention provides a flexible mechanical arm movement obstacle avoidance method based on complex space obstacle positioning, so as to overcome the technical problems in the prior related art.
The technical scheme of the invention is realized as follows:
a flexible mechanical arm movement obstacle avoidance method based on complex space obstacle positioning comprises the following steps:
step S1, collecting target space image characteristic information in advance and using the information as the input of the RCF depth network;
step S2, carrying out edge detection on all objects in the space image by the RCF (Richer Convolutional Features) depth network, and acquiring position information of obstacles in the space;
step S3, carrying out depth measurement on the obtained obstacle area by single line laser, wherein the depth measurement comprises measuring the target depth and obtaining a multi-frame fused depth map;
step S4, matching and fusing the plane position information of the original obstacle and the obtained depth map, and constructing a target three-dimensional point cloud model;
and step S5, marking the obstacle and using it for mechanical arm movement obstacle avoidance.
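In outline, steps S1 through S5 form a sequential pipeline. The sketch below shows only this data flow; each callable is a hypothetical stand-in for the corresponding component (the RCF network, the laser scan, the fusion step, and the path planner), not the patented implementation:

```python
def avoid_obstacles(rgb_image, detect_edges, measure_depth, fuse, plan_path):
    """Data flow of steps S1-S5; every callable is a hypothetical stand-in."""
    obstacle_mask = detect_edges(rgb_image)       # S1/S2: edge detection -> 2D obstacle mask
    depth_map = measure_depth(obstacle_mask)      # S3: single-line laser scan of the masked region
    point_cloud = fuse(obstacle_mask, depth_map)  # S4: match + fuse into a 3D point cloud
    return plan_path(point_cloud)                 # S5: marked obstacle drives path planning
```

Any concrete edge detector, ranging routine, and planner with these shapes can be dropped in without changing the outer flow.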
Further, the acquiring of the characteristic information of the image in the target space includes acquiring the image in the target space by the vision sensor.
Furthermore, the single-line laser carries out depth measurement on the obtained obstacle area, and comprises the step that a single-line laser sensor actively scans the obstacle area in the space to obtain the depth information of the obstacle.
Furthermore, the single-line laser sensor is fixed at the tail end of the flexible mechanical arm and used for translating or rotating along with the flexible mechanical arm to measure the depth of a target area.
Further, constructing the target three-dimensional point cloud model comprises registering the acquired obstacle distance information using the relative position parameters of the vision sensor and the laser sensor, and fusing it with the position information to obtain the three-dimensional point cloud model of the obstacle in the target space.
Further, the information fusion comprises the following steps:
pre-selecting three pairs of freedom degree matching points of the flexible mechanical arm;
and calibrating a transformation matrix, carrying out coordinate system transformation on the three-dimensional point cloud model, and fusing the three-dimensional point cloud model with the original model in a superposition mode.
Further, the method comprises three-dimensional point cloud model alignment, where the alignment of the three-dimensional point cloud model is a matrix transformation, expressed as follows:

Set the transformation matrices R_x(α), R_y(β), R_z(φ) (the single-axis rotation matrices about the x, y and z axes) and the translation vector T, where:

T = [T_x, T_y, T_z]^T

α, β, φ are the rotation angles of the sensor about the x, y, z axes, and T_x, T_y, T_z are the displacements of the sensor.

The transformation formula is expressed as:

[x, y, z]^T = R_z R_y R_x [x', y', z']^T + T

Given the transformation matrix and the coordinates (x', y', z') of a point in the current coordinate system, the coordinates (x, y, z) of the point in the original coordinate system are obtained, completing the alignment of the three-dimensional point cloud model.
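A minimal numeric sketch of this alignment, assuming the standard single-axis rotation matrices composed as R_z·R_y·R_x (the composition order is an assumption here, since the explicit matrices are not reproduced in this text):

```python
import numpy as np

def rotation(alpha, beta, phi):
    """Compose R_z @ R_y @ R_x from the three sensor rotation angles.
    Composition order is an assumption; the patent's figures are not reproduced."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cp, sp = np.cos(phi), np.sin(phi)
    Rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    Rz = np.array([[cp, -sp, 0], [sp, cp, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def align(points_current, alpha, beta, phi, T):
    """Map Nx3 points from the current sensor frame into the original frame:
    p = R_z R_y R_x p' + T, applied row-wise."""
    R = rotation(alpha, beta, phi)
    return points_current @ R.T + np.asarray(T)
```

With zero rotation angles the mapping reduces to a pure translation by T, which is a quick sanity check on any calibrated transformation matrix.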
The invention has the beneficial effects that:
the flexible mechanical arm movement obstacle avoidance method based on complex space obstacle positioning is characterized in that target space image characteristic information is collected in advance and is used as input of an RCF depth network; the RCF depth network carries out edge detection on all objects in the space image to obtain position information of obstacles in the space; carrying out depth measurement on the obtained barrier area by using single-line laser; the plane position information of an original obstacle is matched and fused with the obtained depth map, a target three-dimensional point cloud model is constructed, the obstacle is marked and serves as a mechanical arm to move and avoid an obstacle, path planning guidance is provided in long-time and high-precision work in a non-structural complex space, the method is suitable for satellite assembly and other target spaces, has the characteristics of complexity and non-structural performance, meets the automatic assembly task with high task precision, and is high in guidance efficiency, accurate in path planning and wide in application range.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings without creative efforts.
Fig. 1 is a schematic flow chart of a flexible mechanical arm movement obstacle avoidance method based on complex space obstacle positioning according to an embodiment of the present invention;
fig. 2 is a schematic flow judgment diagram of a flexible mechanical arm movement obstacle avoidance method based on complex space obstacle positioning according to an embodiment of the present invention;
FIG. 3 is a schematic assembly space diagram of a flexible mechanical arm movement obstacle avoidance method based on complex space obstacle positioning according to an embodiment of the invention;
FIG. 4 is a schematic diagram of a segmented cable region of a flexible mechanical arm motion obstacle avoidance method based on complex space obstacle positioning according to an embodiment of the present invention;
FIG. 5 is a cable depth diagram of a flexible mechanical arm movement obstacle avoidance method based on complex space obstacle positioning according to an embodiment of the present invention;
fig. 6 is a schematic diagram of three-dimensional coordinate information of a cable of a flexible mechanical arm motion obstacle avoidance method based on complex space obstacle positioning according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments that can be derived by one of ordinary skill in the art from the embodiments given herein are intended to be within the scope of the present invention.
According to the embodiment of the invention, a flexible mechanical arm movement obstacle avoidance method based on complex space obstacle positioning is provided.
As shown in fig. 1-2, a flexible mechanical arm movement obstacle avoidance method based on complex space obstacle positioning according to an embodiment of the present invention includes the following steps:
step S1, collecting target space image characteristic information in advance and using the information as the input of the RCF depth network;
step S2, the RCF depth network carries out edge detection on all objects in the space image to obtain the position information of the obstacles in the space;
step S3, carrying out depth measurement on the obtained obstacle area by single line laser, wherein the depth measurement comprises measuring the target depth and obtaining a multi-frame fused depth map;
step S4, matching and fusing the plane position information of the original obstacle and the obtained depth map, and constructing a target three-dimensional point cloud model;
and step S5, marking the obstacle and using it for mechanical arm movement obstacle avoidance.
The acquiring of the characteristic information of the target space image comprises the step of acquiring an image in the target space by a vision sensor.
The single-line laser is used for measuring the depth of the obtained obstacle area, and the single-line laser sensor is used for actively scanning the obstacle area in the space to obtain the depth information of the obstacle.
The single-line laser sensor is fixed at the tail end of the flexible mechanical arm and used for translating or rotating along with the flexible mechanical arm to measure the depth of a target area.
Constructing the target three-dimensional point cloud model further comprises registering the acquired obstacle distance information using the relative position parameters of the vision sensor and the laser sensor, and fusing it with the position information to obtain the three-dimensional point cloud model of the obstacle in the target space.
The information fusion comprises the following steps:
pre-selecting three pairs of freedom degree matching points of the flexible mechanical arm;
and calibrating a transformation matrix, carrying out coordinate system transformation on the three-dimensional point cloud model, and fusing the three-dimensional point cloud model with the original model in a superposition mode.
The method also comprises three-dimensional point cloud model alignment, where the alignment of the three-dimensional point cloud model is a matrix transformation, expressed as follows:

Set the transformation matrices R_x(α), R_y(β), R_z(φ) (the single-axis rotation matrices about the x, y and z axes) and the translation vector T, where:

T = [T_x, T_y, T_z]^T

α, β, φ are the rotation angles of the sensor about the x, y, z axes, and T_x, T_y, T_z are the displacements of the sensor.

The transformation formula is expressed as:

[x, y, z]^T = R_z R_y R_x [x', y', z']^T + T

Given the transformation matrix and the coordinates (x', y', z') of a point in the current coordinate system, the coordinates (x, y, z) of the point in the original coordinate system are obtained, completing the alignment of the three-dimensional point cloud model.
By means of the above technical scheme, target space image feature information is collected in advance and used as the input of the RCF depth network; the RCF depth network performs edge detection on all objects in the space image to obtain the position information of obstacles in the space; a single-line laser performs depth measurement on the detected obstacle region; the planar position information of the obstacle is matched and fused with the obtained depth map to construct a target three-dimensional point cloud model; and the obstacle is marked and used for mechanical arm movement obstacle avoidance. This provides path planning guidance for long-duration, high-precision work in unstructured complex spaces, suits satellite assembly and other complex, unstructured target spaces, meets high-precision automatic assembly tasks, and offers high guidance efficiency, accurate path planning, and a wide application range.
In addition, the RCF depth network takes the target space image acquired by the vision sensor as input. The network's convolutions are divided into five stages, with down-sampling by a pooling layer between adjacent stages, so that features at different scales are obtained. Finally, the features of each stage are fused through concatenation and 1×1 convolution, and the obstacle edges are output. Compared with other methods, obstacle detection and edge segmentation based on the RCF depth network are more accurate.
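The five-stage, multi-scale fusion described above can be illustrated with a toy numpy sketch. A real RCF uses learned VGG-style convolutions and a learned 1×1 fusion layer; here a hypothetical `stage_filter` and fixed fusion weights stand in:

```python
import numpy as np

def pool2(x):
    """2x2 max-pooling (stride 2), as between adjacent RCF stages."""
    h, w = x.shape[0] // 2 * 2, x.shape[1] // 2 * 2
    x = x[:h, :w]
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def upsample(x, shape):
    """Nearest-neighbour upsampling of a side output back to input resolution."""
    fy = int(np.ceil(shape[0] / x.shape[0]))
    fx = int(np.ceil(shape[1] / x.shape[1]))
    return np.repeat(np.repeat(x, fy, axis=0), fx, axis=1)[:shape[0], :shape[1]]

def rcf_like_edges(image, stage_filter, weights):
    """Five stages: per-stage edge response -> side output at full resolution;
    downsample between stages; fuse with a weighted (1x1-conv-like) sum.
    `stage_filter` is a hypothetical stand-in for the learned convolutions."""
    x, side_outputs = image, []
    for _ in range(5):
        side_outputs.append(upsample(stage_filter(x), image.shape))
        x = pool2(x)
    return sum(w * s for w, s in zip(weights, side_outputs))
```

The pooling between stages is what gives each side output a different receptive-field scale; the final weighted sum is the cascade-and-fuse step.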
And the laser ranging method adopts a single line laser sensor. After the position information of the obstacle is preliminarily obtained, the laser sensor scans the area of the obstacle to obtain a depth map of the obstacle. And further registering the barrier depth map according to the relative position parameters of the vision sensor and the laser sensor to complete the three-dimensional reconstruction of the barrier in the space.
In addition, specifically, as shown in fig. 3 to 6, in an embodiment, the robot arm avoids the obstacle in the satellite assembly space, and the task of the embodiment is to acquire accurate three-dimensional coordinates of the cable in the assembly space, to realize reconstruction of a cable model, and to further guide the robot arm to avoid the obstacle.
The assembly space in this embodiment, shown in fig. 3, consists of an assembly panel 4, an obstacle 5 (multiple cables), a laser sensor 1, a vision sensor 2, and a mechanical arm. The two-dimensional plane containing the end of the mechanical arm and the sensors is the operation surface 3, defined as the xOy plane; the vision sensor and the laser sensor are both mounted at the end of the mechanical arm and rotate and move with it.
After the robot arm position is initialized, the vision sensor is facing the mounting panel, as shown in FIG. 3. In the embodiment, firstly, the color image information of the assembly space is acquired through the vision sensor and is used as the input of the RCF depth network after being preprocessed. And extracting the cable edge contour information in the images under different scales respectively in five stages of the RCF depth network, and finally outputting the cable edge information through cascading and fusion. And obtaining the two-dimensional coordinate information of the complete cable through post-processing. In this embodiment, the divided cable region is shown in fig. 4.
The single line laser sensor in this embodiment is mounted directly above the vision sensor. According to the preliminarily detected cable position, the laser sensor is guided to face the cable area, the depth information of the cable is measured, and a cable depth map is obtained, as shown in fig. 5. The final depth measurement result is obtained by adopting a mode of fusion of multi-frame depth maps, noise generated by measurement errors in the depth maps can be effectively removed, and meanwhile, the cable depth measurement precision is improved. The movement of the single-line laser sensor is completed by means of a moving device at the tail end of the mechanical arm, and the depth of the interested target area can be actively measured.
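The multi-frame depth fusion step can be sketched as a per-pixel robust estimate over the stacked frames. The median estimator and the zero-as-invalid convention are assumptions, since the text does not specify the exact fusion rule:

```python
import numpy as np

def fuse_depth_frames(frames, invalid=0.0):
    """Fuse several single-line-laser depth frames by per-pixel median,
    ignoring invalid (zero) returns, to suppress measurement noise."""
    stack = np.stack(frames).astype(float)
    stack[stack == invalid] = np.nan      # mask dropouts so they don't bias the estimate
    fused = np.nanmedian(stack, axis=0)   # robust per-pixel depth
    return np.nan_to_num(fused, nan=invalid)  # pixels never measured stay invalid
```

A median tolerates occasional outlier returns better than a mean, which matches the stated goal of removing noise caused by measurement errors.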
The depth map of the cable is registered (matched and fused with the color map) by using the relative position information of the visual sensor and the laser sensor, and a three-dimensional surface point cloud model of the cable in the assembly space is accurately reconstructed, wherein the three-dimensional surface point cloud model comprises three-dimensional coordinate information of the cable, and is shown in fig. 6.
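One plausible sketch of the registration that turns the masked depth map into cable point coordinates is pinhole back-projection. The intrinsics (fx, fy, cx, cy) and the translation-only laser-to-camera extrinsic are illustrative assumptions; the text only states that relative position parameters of the two sensors are used:

```python
import numpy as np

def depth_to_points(depth, mask, fx, fy, cx, cy, laser_offset=(0.0, 0.0, 0.0)):
    """Back-project masked depth pixels into 3D camera coordinates with a
    pinhole model, then shift by an assumed laser-to-camera translation."""
    v, u = np.nonzero(mask)   # pixel rows/cols inside the segmented cable region
    z = depth[v, u]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1) + np.asarray(laser_offset)
```

The returned Nx3 array is the surface point cloud of the cable, carrying the three-dimensional coordinate information shown in fig. 6.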
In addition, after the three-dimensional reconstruction of the cables in the target space is completed, the mechanical arm's path is planned based on the cable information and the target position, guiding the arm around the obstacles. Because single-line laser depth measurement depends on the sensor's position, cables may occlude one another at a single viewing angle, leaving the reconstructed cable model incomplete. To improve modeling precision and completeness, the vision and laser sensors continue to measure the target space while the arm moves, reconstructing the cable point cloud model at intervals. Using the arm's rotation, translation, and other pose parameters, a transformation matrix is computed to align each new cloud to the original model's coordinate system, supplementing and correcting the original model. The whole process repeats at intervals until the mechanical arm reaches the target area.
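The periodic supplement-and-correct loop can be sketched as transforming each newly reconstructed cloud into the original model frame and merging it in. Plain concatenation stands in here for the supplementation and correction step, which the text does not detail:

```python
import numpy as np

def update_model(model_points, new_points, arm_pose_R, arm_pose_T):
    """Align an Nx3 cloud measured in the arm's current pose to the original
    model frame (rotation R, translation T from the arm's pose parameters),
    then supplement the existing Mx3 cloud with it."""
    aligned = new_points @ np.asarray(arm_pose_R).T + np.asarray(arm_pose_T)
    return np.vstack([model_points, aligned])
```

Repeating this as the arm moves fills in regions that were occluded from earlier viewing angles.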
In the scheme, path planning guidance is provided in long-time and high-precision work in a non-structural complex space, an automatic assembly task with high task precision is met, guidance efficiency is high, and path planning is accurate.
In summary, the above technical scheme collects target space image feature information in advance as the input of an RCF depth network, detects obstacle edges and positions, measures obstacle depth with a single-line laser, fuses the planar position information with the depth map into a target three-dimensional point cloud model, and marks the obstacle for mechanical arm movement obstacle avoidance. The scheme provides path planning guidance for long-duration, high-precision work in unstructured complex spaces, suits satellite assembly and other complex, unstructured target spaces, and achieves high guidance efficiency, accurate path planning, and a wide application range.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.
Claims (7)
1. A flexible mechanical arm movement obstacle avoidance method based on complex space obstacle positioning is characterized by comprising the following steps:
acquiring target space image feature information in advance and using it as the input of an RCF (Richer Convolutional Features) depth network;
the RCF depth network carries out edge detection on all objects in the space image to obtain position information of obstacles in the space;
the single-line laser carries out depth measurement on the obtained obstacle region, wherein the depth measurement comprises the steps of measuring the target depth and obtaining a multi-frame fused depth map;
matching and fusing the plane position information of the original barrier and the obtained depth map, and constructing a target three-dimensional point cloud model;
and marking the obstacle and using it for mechanical arm movement obstacle avoidance.
2. The flexible mechanical arm movement obstacle avoidance method based on complex space obstacle positioning as claimed in claim 1, wherein the acquiring of target space image feature information includes a vision sensor acquiring an image under a target space.
3. The flexible mechanical arm motion obstacle avoidance method based on complex space obstacle positioning according to claim 2, wherein the single line laser is used for depth measurement of the obtained obstacle region, and comprises a single line laser sensor which is used for scanning the obstacle region in the space actively to obtain obstacle depth information.
4. The flexible mechanical arm motion obstacle avoidance method based on complex space obstacle positioning as claimed in claim 3, wherein the single line laser sensor is fixed at the tail end of the flexible mechanical arm and is used for performing translation or rotation with the flexible mechanical arm to perform depth measurement on a target area.
5. The flexible mechanical arm motion obstacle avoidance method based on complex space obstacle positioning according to claim 4, wherein constructing the target three-dimensional point cloud model further comprises registering the acquired obstacle distance information using the relative position parameters of the vision sensor and the laser sensor, and fusing it with the position information to obtain the three-dimensional point cloud model of the obstacle in the target space.
6. The flexible mechanical arm motion obstacle avoidance method based on complex space obstacle positioning as claimed in claim 5, wherein the information fusion comprises the following steps:
pre-selecting three pairs of freedom degree matching points of the flexible mechanical arm;
and calibrating a transformation matrix, carrying out coordinate system transformation on the three-dimensional point cloud model, and fusing the three-dimensional point cloud model with the original model in a superposition mode.
7. The flexible mechanical arm motion obstacle avoidance method based on complex space obstacle positioning as claimed in claim 6, further comprising three-dimensional point cloud model alignment, wherein the alignment of the three-dimensional point cloud model is a matrix transformation, represented as:

setting the transformation matrices R_x(α), R_y(β), R_z(φ) (the single-axis rotation matrices about the x, y and z axes) and the translation vector T, wherein:

T = [T_x, T_y, T_z]^T

α, β, φ are the rotation angles of the sensor about the x, y, z axes, and T_x, T_y, T_z are the displacements of the sensor;

the transformation formula being expressed as:

[x, y, z]^T = R_z R_y R_x [x', y', z']^T + T

and determining, from the transformation matrix and the coordinates (x', y', z') of a point in the current coordinate system, the coordinates (x, y, z) of the point in the original coordinate system, completing the alignment of the three-dimensional point cloud model.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011519003.8A CN112508933A (en) | 2020-12-21 | 2020-12-21 | Flexible mechanical arm movement obstacle avoidance method based on complex space obstacle positioning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112508933A true CN112508933A (en) | 2021-03-16 |
Family
ID=74922877
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011519003.8A Pending CN112508933A (en) | 2020-12-21 | 2020-12-21 | Flexible mechanical arm movement obstacle avoidance method based on complex space obstacle positioning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112508933A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113074748A (en) * | 2021-03-29 | 2021-07-06 | 北京三快在线科技有限公司 | Path planning method and device for unmanned equipment |
CN114654471A (en) * | 2022-04-29 | 2022-06-24 | 中国煤炭科工集团太原研究院有限公司 | Anchor protection mechanical arm obstacle avoidance system and method based on laser scanner |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20180061949A (en) * | 2016-11-30 | 2018-06-08 | 주식회사 유진로봇 | Obstacle Sensing Apparatus and Method for Multi-Channels Based Mobile Robot, Mobile Robot including the same |
CN108858199A (en) * | 2018-07-27 | 2018-11-23 | 中国科学院自动化研究所 | The method of the service robot grasp target object of view-based access control model |
CN109048926A (en) * | 2018-10-24 | 2018-12-21 | 河北工业大学 | A kind of intelligent robot obstacle avoidance system and method based on stereoscopic vision |
CN110587600A (en) * | 2019-08-20 | 2019-12-20 | 南京理工大学 | Point cloud-based autonomous path planning method for live working robot |
CN111958590A (en) * | 2020-07-20 | 2020-11-20 | 佛山科学技术学院 | Mechanical arm anti-collision method and system in complex three-dimensional environment |
CN111993425A (en) * | 2020-08-25 | 2020-11-27 | 深圳市优必选科技股份有限公司 | Obstacle avoidance method, device, mechanical arm and storage medium |
Non-Patent Citations (2)
Title |
---|
丁勤 et al.: "A multi-sensor based measurement system for uncertain environments", 《系统仿真技术》 (System Simulation Technology), vol. 16, no. 1, 28 February 2020 (2020-02-28), pages 46-50 * |
吴向东: "Research on dynamic obstacle avoidance planning for a manipulator in environments with movable obstacles", China Master's Theses Full-text Database, Information Science and Technology, no. 05, 15 May 2019 (2019-05-15), pages 1-88 * |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113074748A (en) * | 2021-03-29 | 2021-07-06 | 北京三快在线科技有限公司 | Path planning method and device for unmanned equipment |
CN113074748B (en) * | 2021-03-29 | 2022-08-26 | 北京三快在线科技有限公司 | Path planning method and device for unmanned equipment |
CN114654471A (en) * | 2022-04-29 | 2022-06-24 | 中国煤炭科工集团太原研究院有限公司 | Anchor protection mechanical arm obstacle avoidance system and method based on laser scanner |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||