CN114643598A - Mechanical arm tail end position estimation method based on multi-information fusion - Google Patents
Mechanical arm tail end position estimation method based on multi-information fusion
- Publication number
- CN114643598A (application CN202210517157.6A)
- Authority
- CN
- China
- Prior art keywords
- mechanical arm
- tail end
- visual sensor
- target
- sensor
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J19/00—Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
- B25J19/0095—Means or methods for testing manipulators
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/20—Instruments for performing navigational calculations
Abstract
The invention provides a mechanical arm tail end position estimation method based on multi-information fusion, and belongs to the technical field of mechanical arm pose measurement. The method comprises the following steps: determining targets; installing a visual sensor at the tail end of the mechanical arm, laying targets, and detecting the pose of the laid targets through the visual sensor so as to calibrate the spatial position relation between the visual sensor and the tail end of the mechanical arm; obtaining the tail end position of the mechanical arm estimated based on the visual sensor from the calibrated spatial position relation between the visual sensor and the tail end of the mechanical arm; and, under the condition that the image data acquired by the visual sensor and the mechanical arm feedback data are kept consistent in the spatial domain and the time domain, performing data fusion by multi-rate Kalman filtering to obtain the fused tail end position information of the mechanical arm. By adopting the method, the estimation precision of the tail end position of the mechanical arm can be improved.
Description
Technical Field
The invention relates to the technical field of pose measurement of mechanical arms, in particular to a mechanical arm tail end position estimation method based on multi-information fusion.
Background
The mechanical arm assists people with dull, dirty, dangerous and difficult work, and is widely applied in public service, industrial manufacturing, national security and the exploration of unknown fields. In recent years, methods that assist mechanical arm position estimation with external sensors, such as stereoscopic vision, depth vision, inertial sensors, scanners and laser trackers, have emerged. With the rapid development of electronic technology and image processing technology, vision sensors have become a new mainstay in the field of mechanical arm position measurement. However, position models established from vision involve a large amount of computation, and multi-information fusion suffers from spatial- and temporal-domain inconsistency and complex fusion algorithms, so the tail end position cannot be estimated accurately.
In summary, an accurate method for fusing multiple detection means is urgently needed for estimating the tail end position of the mechanical arm at present, so that the tail end position of the mechanical arm can be measured with high precision.
Disclosure of Invention
The embodiment of the invention provides a mechanical arm tail end position estimation method based on multi-information fusion, which can improve the mechanical arm tail end position estimation precision. The technical scheme is as follows:
the embodiment of the invention provides a mechanical arm tail end position estimation method based on multi-information fusion, which comprises the following steps:
determining a target;
installing a visual sensor at the tail end of the mechanical arm, laying targets, and performing pose detection on the laid targets through the visual sensor so as to calibrate the spatial position relation between the visual sensor and the tail end of the mechanical arm;
obtaining the tail end position of the mechanical arm estimated based on the visual sensor based on the calibrated spatial position relationship between the visual sensor and the tail end of the mechanical arm;
and under the condition of ensuring the consistency of the image data acquired by the visual sensor and the mechanical arm feedback data in a space domain and a time domain, performing data fusion by adopting multi-rate Kalman filtering to obtain the fused mechanical arm tail end position information.
Further, the target is a cubic target composed of the three colors red, green and blue, and 6 differently colored targets are formed by arranging the three colors in different orders; the three colors on the 6 faces of the same target are arranged in the same order;
the three colors on each face of the target form squares with side lengths of 1.4mm, 2.4mm and 3.0mm, and the 3 squares share the same center point;
the target has a central through hole.
Further, the installing a visual sensor at the tail end of the mechanical arm, laying a target, and performing pose detection on the laid target through the visual sensor to calibrate the spatial position relationship between the visual sensor and the tail end of the mechanical arm includes:
installing a visual sensor at the tail end of the mechanical arm, ensuring that its optical axis is aligned with the direction of the extension rod, and fixedly connecting them;
selecting five targets, laying four of them in a ring, and placing the fifth target at the center of the ring;
adjusting the pose of the visual sensor and the mechanical arm until the four ring targets, projected relative to the central target, occupy equal numbers of pixels in the image, and then measuring and recording the spatial relative position of the visual sensor and the tail end of the mechanical arm.
Further, obtaining the end position of the mechanical arm estimated based on the visual sensor based on the calibrated spatial position relationship between the visual sensor and the end of the mechanical arm includes:
uniformly laying 6 targets in the motion area of the mechanical arm, obtaining the positions of the targets in the image by vertical imaging, and obtaining the tail end position of the mechanical arm estimated based on the visual sensor according to the positions of the targets in the image and the calibrated spatial position relation between the visual sensor and the tail end of the mechanical arm, combined with the parameters of the visual sensor and the ground resolution.
Further, the obtained tail end position of the mechanical arm estimated based on the visual sensor is represented as:
x_v = x_t - G·(u - u0) + Δx, y_v = y_t - G·(v - v0) + Δy
wherein x_v and y_v both denote the tail end position of the mechanical arm estimated based on the visual sensor, G denotes the ground resolution, (u0, v0) denotes the coordinates of the center point of the image of the visual sensor, (u, v) denotes the coordinates of the center point of the target, (x_t, y_t) denotes the spatial coordinate position of the target, f denotes the focal length of the lens, d denotes the pixel size, and (Δx, Δy) denotes the spatial relative position of the visual sensor and the tail end of the mechanical arm.
Further, the ground resolution is expressed as:
G = (3.0/n1 + 2.4/n2 + 1.4/n3)/3
wherein n1, n2 and n3 respectively denote the number of pixels corresponding to the three line segments of the target, from long to short.
Further, under the condition that the consistency of the image data acquired by the visual sensor and the mechanical arm feedback data in the spatial domain and the time domain is ensured, performing data fusion by multi-rate Kalman filtering to obtain the fused tail end position information of the mechanical arm includes:
in the time domain, controlling the visual sensor to image through an external trigger by means of an embedded hardware circuit, and acquiring the image data together with the interval t1 between the rising edge of the trigger signal and the image frame synchronization signal; meanwhile, controlling the controller of the mechanical arm through an external trigger, the interval t2 between the rising edge of the trigger signal and the start signal of the mechanical arm feedback data differing from t1 by the time difference Δt = t1 - t2; after triggering the visual sensor, triggering the mechanical arm controller after a delay of Δt, so as to ensure the synchronization of the image data and the mechanical arm feedback data;
in the spatial domain, ensuring the spatial consistency of the image data and the mechanical arm feedback data according to the calibrated spatial position relation between the visual sensor and the tail end of the mechanical arm;
and fusing the data measured by the visual sensor and the mechanical arm's own sensors by multi-rate Kalman filtering to obtain the fused tail end position information of the mechanical arm.
Further, the fused mechanical arm end position information is obtained and expressed as:
wherein the subscript k denotes the k-th imaging of the visual sensor; i denotes the i-th mechanical arm position sample within one imaging period of the visual sensor; N denotes the ratio of the imaging period of the visual sensor to the sampling period of the mechanical arm; X_f denotes the fused tail end position information of the mechanical arm; δ denotes the difference between the adjacent-frame position difference of the visual sensor and the position difference of the mechanical arm over the same time; X_a denotes the three-axis tail end position information estimated by the mechanical arm; X_v denotes the tail end position of the mechanical arm estimated based on the visual sensor; Δ denotes the difference between the installation positions of the visual sensor and the tail end of the mechanical arm; and w and η respectively denote system noise and observation noise with unknown statistical properties.
The technical scheme provided by the embodiment of the invention has the beneficial effects that at least:
in the embodiments of the invention, targets are determined; a visual sensor is installed at the tail end of the mechanical arm, targets are laid, and pose detection is performed on the laid targets through the visual sensor so as to calibrate the spatial position relation between the visual sensor and the tail end of the mechanical arm; the tail end position of the mechanical arm estimated based on the visual sensor is obtained from the calibrated spatial position relation; and, under the condition that the image data acquired by the visual sensor and the mechanical arm feedback data are kept consistent in the spatial domain and the time domain, data fusion is performed by multi-rate Kalman filtering to obtain the fused tail end position information of the mechanical arm. Thus, the accuracy of estimating the tail end position of the mechanical arm can be improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic flowchart of a method for estimating a position of an end of a mechanical arm based on multi-information fusion according to an embodiment of the present invention;
FIG. 2 is a block flow diagram of a method for estimating a position of an end of a mechanical arm based on multi-information fusion according to an embodiment of the present invention;
FIG. 3 is a schematic illustration of a target provided by an embodiment of the present invention;
FIG. 4 is a schematic diagram of a visual sensor and a robot arm tip for estimating a spatial position according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of ground resolution provided by an embodiment of the present invention;
FIG. 6 is a schematic illustration of a target mapping in an image provided by an embodiment of the present invention;
FIG. 7 is a schematic diagram of a temporal synchronization relationship between a vision sensor and a robotic arm according to an embodiment of the present invention;
fig. 8 is a schematic coordinate system diagram of a multi-information-fused mechanical arm end position estimation method according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
As shown in fig. 1 and fig. 2, an embodiment of the present invention provides a method for estimating a position of a distal end of a mechanical arm based on multi-information fusion, including:
s101, determining a target;
in this embodiment, a small, light and uniquely identifiable multi-color target is determined according to the control precision and motion range of the mechanical arm; the target is a 3mm cube composed of the three colors red, green and blue, and 6 differently colored targets are formed by arranging the three colors in different orders, which facilitates identification; the three colors on the 6 faces of the same target are arranged in the same order;
as shown in fig. 3, the three colors on each face of the target form squares with side lengths of 1.4mm, 2.4mm and 3.0mm, and the 3 squares share the same center point;
the target has a central through hole with a diameter of 0.2mm, so that the target can be accurately installed and fixed.
S102, mounting a visual sensor at the tail end of the mechanical arm, laying a target, and carrying out pose detection on the laid target through the visual sensor so as to calibrate the spatial position relation between the visual sensor and the tail end of the mechanical arm; as shown in fig. 4, the method may specifically include the following steps:
installing a visual sensor at the tail end of the mechanical arm, ensuring that its optical axis is aligned with the direction of the extension rod, and fixedly connecting them;
selecting five targets, laying four of them in a ring, and placing the fifth target at the center of the ring;
adjusting the pose of the visual sensor and the mechanical arm until the four ring targets, projected relative to the central target, occupy equal numbers of pixels in the image, and then measuring and recording the spatial relative position of the visual sensor and the tail end of the mechanical arm.
In this embodiment, the position relationship of each target projected in the image is detected by the vision sensor, so as to calibrate the spatial position relationship between the vision sensor and the tail end of the mechanical arm.
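The acceptance criterion of this calibration step — adjusting the pose until the four ring targets occupy (nearly) equal pixel counts in the image — could be sketched as the following Python predicate; the function name and the 2% tolerance are illustrative assumptions, not part of the patent:

```python
def ring_targets_balanced(pixel_counts, tol=0.02):
    """Return True when the ring targets occupy nearly equal pixel
    counts, i.e. the camera is centred over the middle target.
    pixel_counts: iterable of per-target pixel counts, one per ring target.
    tol: allowed relative deviation from the mean (assumed value)."""
    mean = sum(pixel_counts) / len(pixel_counts)
    return all(abs(n - mean) <= tol * mean for n in pixel_counts)
```

Once this predicate holds while adjusting the arm pose, the spatial relative position of the visual sensor and the tail end of the mechanical arm would be measured and recorded.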
S103, obtaining the tail end position of the mechanical arm estimated based on the visual sensor based on the calibrated spatial position relation between the visual sensor and the tail end of the mechanical arm;
In this embodiment, 6 targets are uniformly laid in the region where the mechanical arm moves, the positions of the targets in the image (specifically, the coordinates (u, v) of the target center points) are obtained by vertical imaging, and the tail end position of the mechanical arm estimated based on the visual sensor is obtained according to the positions of the targets in the image and the calibrated spatial position relation between the visual sensor and the tail end of the mechanical arm, combined with the parameters of the visual sensor and the ground resolution.
In this embodiment, as shown in FIG. 5, the focal length of the visual sensor in vertical view is f and the pixel size is d. For any point p in the focal plane and an adjacent point p', imaged at a height H, the corresponding ground resolution is G = H·d/f, where P and P' denote the scene points corresponding to p and p'.
The visual sensor acquires the target image, determines the position (u, v) of the target center point according to the target colors, and calculates the ground resolution as the average over the three different side lengths of the target, expressed as:
G = (3.0/n1 + 2.4/n2 + 1.4/n3)/3
wherein n1, n2 and n3 respectively denote the number of pixels corresponding to the three line segments of the target, from long to short.
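The averaging over the three known target side lengths (3.0mm, 2.4mm, 1.4mm) can be sketched in Python; the function name and argument order are illustrative assumptions:

```python
def ground_resolution(n_long, n_mid, n_short, sides_mm=(3.0, 2.4, 1.4)):
    """Estimate the ground resolution G (mm per pixel) by averaging the
    three known target side lengths over their measured pixel counts,
    taken from the longest to the shortest line segment of the target."""
    counts = (n_long, n_mid, n_short)
    return sum(side / n for side, n in zip(sides_mm, counts)) / 3.0
```

For example, if the 3.0mm side spans 100 pixels, the 2.4mm side 80 pixels and the 1.4mm side about 47 pixels, each ratio is close to 0.03mm per pixel, and so is their average.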
As shown in fig. 6, with the coordinates (u0, v0) of the center point of the image of the visual sensor, the coordinates (u, v) of the center point of the target and the spatial coordinate position (x_t, y_t) of the target known, the tail end position of the mechanical arm estimated based on the visual sensor is obtained as:
x_v = x_t - G·(u - u0) + Δx, y_v = y_t - G·(v - v0) + Δy
wherein (x_v, y_v) denotes the tail end position of the mechanical arm estimated based on the visual sensor and (Δx, Δy) denotes the calibrated spatial relative position of the visual sensor and the tail end of the mechanical arm.
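Under a vertical-imaging pinhole assumption, this estimate can be sketched as follows; the sign convention and the handling of the mounting offset are assumptions made for illustration, since the patent text does not reproduce the exact expression:

```python
def arm_end_position(target_px, image_center_px, target_world_mm,
                     g_mm_per_px, mount_offset_mm=(0.0, 0.0)):
    """Vision-based tail end position estimate (planar): start from the
    known target location, subtract the image-plane offset scaled by the
    ground resolution, and add the calibrated camera-to-arm-end offset."""
    u, v = target_px
    u0, v0 = image_center_px
    xt, yt = target_world_mm
    dx, dy = mount_offset_mm
    x = xt - g_mm_per_px * (u - u0) + dx
    y = yt - g_mm_per_px * (v - v0) + dy
    return x, y
```

When the target projects exactly onto the image center, the estimate reduces to the target's known position plus the calibrated mounting offset.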
S104, under the condition of ensuring the consistency of image data acquired by the visual sensor and mechanical arm feedback data in a space domain and a time domain, performing data fusion by adopting multi-rate Kalman filtering to obtain fused mechanical arm tail end position information, and specifically, the method comprises the following steps:
In the time domain, as shown in fig. 7, an embedded hardware circuit controls the visual sensor to image through an external trigger, and the image data is acquired together with the interval t1 between the rising edge of the trigger signal and the image frame synchronization signal; meanwhile, the controller of the mechanical arm is controlled through an external trigger, and the interval t2 between the rising edge of the trigger signal and the start signal of the mechanical arm feedback data (including position and angle information) differs from t1 by the time difference Δt = t1 - t2, that is, the image data lags the mechanical arm feedback data by Δt; therefore, after the visual sensor is triggered, the mechanical arm controller is triggered after a delay of Δt, ensuring the synchronization of the image data and the mechanical arm feedback data;
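The delay computation in the step above is simple; the following sketch uses an illustrative function name and units (seconds):

```python
def arm_trigger_delay(t_image_sync_s, t_arm_start_s):
    """Delay to wait after triggering the visual sensor before
    triggering the arm controller, so that the image frame and the
    arm feedback start at the same instant: dt = t1 - t2.
    t_image_sync_s: trigger rising edge to image frame sync (t1).
    t_arm_start_s: trigger rising edge to arm feedback start (t2)."""
    dt = t_image_sync_s - t_arm_start_s
    if dt < 0:
        # the arm feedback would lead the image; the trigger order
        # would have to be swapped in that case
        raise ValueError("negative delay: trigger the arm controller first")
    return dt
```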
In the spatial domain, as shown in fig. 8, the spatial consistency of the image data and the mechanical arm feedback data is ensured according to the calibrated spatial position relation between the visual sensor and the tail end of the mechanical arm. The mechanical arm takes its base as the reference coordinate system, and the visual sensor takes the ground target as the reference coordinate system; the target and the ground base have no axial distance and differ only in horizontal position along the x and y directions. The tail end position coordinate system of the mechanical arm is established on the basis of the mechanical arm base coordinate system; the mechanical arm and the visual sensor are spatially separated by the position distance Δ; the visual sensor has its own coordinate system, and each target coordinate system carries a superscript j denoting the spatial coordinate position of the j-th target, with j taking values 1 to 6;
Data fusion: considering that the update rates of the position information estimated by the mechanical arm and of the position information estimated by the visual sensor are inconsistent, the data measured by the visual sensor and by the mechanical arm's own sensors (including angle sensors and position sensors) are fused by multi-rate Kalman filtering to obtain fused, higher-precision tail end position information of the mechanical arm.
In this embodiment, each filtering period can be divided into a time update and a measurement update. At filtering instants where no slow-rate position measurement from the visual sensor is available, the filter performs only the time update using the position information of the mechanical arm; at filtering instants where a slow-rate visual position measurement arrives, the filter performs both the time update and the measurement update.
In this embodiment, the position information of the end of the mechanical arm after fusion is obtained and expressed as:
wherein the subscript k denotes the k-th imaging of the visual sensor; i denotes the i-th mechanical arm position sample within one imaging period of the visual sensor; N denotes the ratio of the imaging period of the visual sensor to the sampling period of the mechanical arm; X_f denotes the fused tail end position information of the mechanical arm; δ denotes the difference between the adjacent-frame position difference of the visual sensor and the position difference of the mechanical arm over the same time; X_a denotes the three-axis tail end position information estimated by the mechanical arm; X_v denotes the tail end position of the mechanical arm estimated based on the visual sensor; Δ denotes the difference between the installation positions of the visual sensor and the tail end of the mechanical arm; and w and η respectively denote system noise and observation noise with unknown statistical properties.
In this embodiment, δ is computed from the adjacent-frame position differences of the visual sensor and of the mechanical arm, referenced to the spatial relative position of the visual sensor and the tail end of the mechanical arm.
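A minimal one-axis sketch of this multi-rate scheme is given below; the random-walk motion model and the noise variances q and r are illustrative assumptions, not values taken from the patent:

```python
def fuse_positions(arm_pos, vision_pos, n_ratio, q=1e-4, r=1e-2):
    """Multi-rate Kalman fusion sketch for one position axis.
    arm_pos: fast arm position samples, one per arm sampling period.
    vision_pos: slow vision estimates, one per n_ratio arm samples.
    A time update runs at every fast arm sample; a measurement update
    runs only at samples where a vision frame is available."""
    x, p = arm_pos[0], 1.0  # state estimate and its variance
    fused = []
    for k, z_arm in enumerate(arm_pos):
        if k > 0:
            # time update: propagate with the arm's incremental motion
            x += z_arm - arm_pos[k - 1]
        p += q
        if k % n_ratio == 0 and k // n_ratio < len(vision_pos):
            # measurement update on vision frames only
            gain = p / (p + r)
            x += gain * (vision_pos[k // n_ratio] - x)
            p *= 1.0 - gain
        fused.append(x)
    return fused
```

With fast arm samples on a straight trajectory and a vision frame every n_ratio-th sample, the fused track follows the motion while the covariance shrinks at each vision update.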
In this embodiment, S101-S103 constitute a visual sensor-based end position estimation; s104 is the estimation of the end position based on the multi-information fusion of the visual sensor and the mechanical arm self-sensor.
The method for estimating the tail end position of the mechanical arm based on multi-information fusion at least has the following beneficial effects:
1. The invention designs a light and small cubic target composed of red, green and blue. The target consists of three different colors, which makes it easy to identify; the target is designed in three different sizes, so that the pixel size can be measured with high precision after acquisition by the visual sensor; the target has a central through hole, which allows accurate installation. The target thus reduces the recognition burden on the visual sensor, improves the measurement precision of the visual sensor, and is easy to install.
2. The invention constructs the spatial position model of the visual sensor and the mechanical arm through the targets, as well as the tail end position model based on the visual sensor, thereby reducing the influence of multi-sensor installation errors on the system and simplifying the tail end position model of the visual sensor, which solves the problem that existing vision-based tail end position modeling is complex.
3. The invention adopts multi-rate Kalman filtering to achieve the consistency of asynchronous data in the spatial domain and the time domain; the individual detection strengths of the multiple sensors are fully exploited, and complementary fusion of the multi-source data is realized, which further improves the estimation precision of the tail end position of the mechanical arm and solves the problems of spatial- and temporal-domain inconsistency, complex fusion algorithms and low tail end position estimation precision in existing multi-information fusion.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and should not be taken as limiting the scope of the present invention, which is intended to cover any modifications, equivalents, improvements, etc. within the spirit and scope of the present invention.
Claims (8)
1. A mechanical arm tail end position estimation method based on multi-information fusion is characterized by comprising the following steps:
determining a target;
installing a visual sensor at the tail end of the mechanical arm, laying targets, and performing pose detection on the laid targets through the visual sensor so as to calibrate the spatial position relation between the visual sensor and the tail end of the mechanical arm;
obtaining the tail end position of the mechanical arm estimated based on the visual sensor based on the calibrated spatial position relationship between the visual sensor and the tail end of the mechanical arm;
and under the condition of ensuring the consistency of the image data acquired by the visual sensor and the mechanical arm feedback data in a space domain and a time domain, performing data fusion by adopting multi-rate Kalman filtering to obtain the fused mechanical arm tail end position information.
2. The method for estimating the tail end position of the mechanical arm based on multi-information fusion according to claim 1, wherein the target is a cubic target composed of the three colors red, green and blue, and 6 differently colored targets are formed by arranging the three colors in different orders; the three colors on the 6 faces of the same target are arranged in the same order;
the three colors on each face of the target form squares with side lengths of 1.4mm, 2.4mm and 3.0mm, and the 3 squares share the same center point;
the target has a central through hole.
3. The method for estimating the position of the tail end of the mechanical arm based on multi-information fusion according to claim 1, wherein the steps of installing the visual sensor at the tail end of the mechanical arm, laying the target, and performing pose detection on the laid target through the visual sensor to calibrate the spatial position relationship between the visual sensor and the tail end of the mechanical arm comprise:
installing a visual sensor at the tail end of the mechanical arm, ensuring that its optical axis is aligned with the direction of the extension rod, and fixedly connecting them;
selecting five targets, laying four of them in a ring, and placing the fifth target at the center of the ring;
adjusting the pose of the visual sensor and the mechanical arm until the four ring targets, projected relative to the central target, occupy equal numbers of pixels in the image, and then measuring and recording the spatial relative position of the visual sensor and the tail end of the mechanical arm.
4. The method for estimating the tail end position of the mechanical arm based on multi-information fusion as claimed in claim 1, wherein the obtaining the tail end position of the mechanical arm based on the vision sensor estimation based on the calibrated spatial position relationship between the vision sensor and the tail end of the mechanical arm comprises:
uniformly laying 6 targets in the motion area of the mechanical arm, obtaining the positions of the targets in the image by vertical imaging, and obtaining the tail end position of the mechanical arm estimated based on the visual sensor according to the positions of the targets in the image and the calibrated spatial position relation between the visual sensor and the tail end of the mechanical arm, combined with the parameters of the visual sensor and the ground resolution.
5. The method for estimating the tail end position of the mechanical arm based on multi-information fusion according to claim 4, wherein the obtained tail end position of the mechanical arm estimated based on the visual sensor is represented as:
x_v = x_t - G·(u - u0) + Δx, y_v = y_t - G·(v - v0) + Δy
wherein x_v and y_v denote the tail end position of the mechanical arm estimated based on the visual sensor, G denotes the ground resolution, (u0, v0) denotes the coordinates of the center point of the image of the visual sensor, (u, v) denotes the coordinates of the center point of the target, (x_t, y_t) denotes the spatial coordinate position of the target, f denotes the focal length of the lens, d denotes the pixel size, and (Δx, Δy) denotes the spatial relative position of the visual sensor and the tail end of the mechanical arm.
6. The method for estimating the tail end position of the mechanical arm based on multi-information fusion according to claim 5, wherein the ground resolution is expressed as:
G = (3.0/n1 + 2.4/n2 + 1.4/n3)/3
wherein n1, n2 and n3 respectively denote the number of pixels corresponding to the three line segments of the target, from long to short.
7. The method for estimating the tail end position of the mechanical arm based on multi-information fusion according to claim 1, wherein the step of performing data fusion by using multi-rate Kalman filtering under the condition of ensuring the consistency of image data acquired by a visual sensor and mechanical arm feedback data in a space domain and a time domain comprises the following steps of:
in the time domain, an embedded hardware circuit controls the visual sensor to image through an external trigger, and the image data is acquired together with the interval t1 between the rising edge of the trigger signal and the image frame synchronization signal; meanwhile, the controller of the mechanical arm is controlled through an external trigger, and the interval t2 between the rising edge of the trigger signal and the start signal of the mechanical arm feedback data differs from t1 by the time difference Δt = t1 - t2; after triggering the visual sensor, the mechanical arm controller is triggered after a delay of Δt, ensuring the synchronization of the image data and the mechanical arm feedback data;
in the spatial domain, the spatial consistency of the image data and the mechanical arm feedback data is ensured according to the calibrated spatial position relation between the visual sensor and the tail end of the mechanical arm;
and fusing the data measured by the vision sensor and the sensor of the mechanical arm by adopting a multi-rate Kalman filtering mode to obtain fused tail end position information of the mechanical arm.
8. The method for estimating the position of the end of the mechanical arm based on the multi-information fusion according to claim 1, wherein the position information of the end of the mechanical arm after the fusion is obtained is expressed as:
wherein the subscript k denotes the k-th imaging of the visual sensor; i denotes the i-th mechanical arm position sample within one imaging period of the visual sensor; N denotes the ratio of the imaging period of the visual sensor to the sampling period of the mechanical arm; X_f denotes the fused tail end position information of the mechanical arm; δ denotes the difference between the adjacent-frame position difference of the visual sensor and the position difference of the mechanical arm over the same time; X_a denotes the three-axis tail end position information estimated by the mechanical arm; X_v denotes the tail end position of the mechanical arm estimated based on the visual sensor; Δ denotes the difference between the installation positions of the visual sensor and the tail end of the mechanical arm; and w and η respectively denote system noise and observation noise with unknown statistical properties.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210517157.6A CN114643598B (en) | 2022-05-13 | 2022-05-13 | Mechanical arm tail end position estimation method based on multi-information fusion |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114643598A true CN114643598A (en) | 2022-06-21 |
CN114643598B CN114643598B (en) | 2022-09-13 |
Family
ID=81997310
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210517157.6A Active CN114643598B (en) | 2022-05-13 | 2022-05-13 | Mechanical arm tail end position estimation method based on multi-information fusion |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114643598B (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101402199A (en) * | 2008-10-20 | 2009-04-08 | 北京理工大学 | Hand-eye type robot movable target extracting method with low servo accuracy based on visual sensation |
CN106097390A (en) * | 2016-06-13 | 2016-11-09 | 北京理工大学 | A kind of robot kinematics's parameter calibration method based on Kalman filtering |
CN108362266A (en) * | 2018-02-22 | 2018-08-03 | 北京航空航天大学 | One kind is based on EKF laser rangings auxiliary monocular vision measurement method and system |
CN110136208A (en) * | 2019-05-20 | 2019-08-16 | 北京无远弗届科技有限公司 | A kind of the joint automatic calibration method and device of Visual Servoing System |
US10699421B1 (en) * | 2017-03-29 | 2020-06-30 | Amazon Technologies, Inc. | Tracking objects in three-dimensional space using calibrated visual cameras and depth cameras |
CN112539746A (en) * | 2020-10-21 | 2021-03-23 | 济南大学 | Robot vision/INS combined positioning method and system based on multi-frequency Kalman filtering |
CN112917510A (en) * | 2019-12-06 | 2021-06-08 | 中国科学院沈阳自动化研究所 | Industrial robot space position appearance precision test system |
CN113643380A (en) * | 2021-08-16 | 2021-11-12 | 安徽元古纪智能科技有限公司 | Mechanical arm guiding method based on monocular camera vision target positioning |
Non-Patent Citations (2)
Title |
---|
WANG Yawei: "Research on a visual servo control method for manipulators combining Kalman filtering and fuzzy logic", 《自动化技术与应用》 (Techniques of Automation and Applications) * |
CHEN Yi et al.: "Application of a simplified UKF algorithm in camera calibration", 《计算机工程》 (Computer Engineering) * |
Also Published As
Publication number | Publication date |
---|---|
CN114643598B (en) | 2022-09-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
TWI408486B (en) | Camera with dynamic calibration and method thereof | |
CN102410832B (en) | Position and orientation measurement apparatus and position and orientation measurement method | |
CN109737883A (en) | A kind of three-dimensional deformation dynamic measurement system and measurement method based on image recognition | |
EP3032818B1 (en) | Image processing device | |
US9881377B2 (en) | Apparatus and method for determining the distinct location of an image-recording camera | |
CN103782232A (en) | Projector and control method thereof | |
CN112581545B (en) | Multi-mode heat source identification and three-dimensional space positioning system, method and storage medium | |
JP2015197344A (en) | Method and device for continuously monitoring structure displacement | |
CN112070841A (en) | Rapid combined calibration method for millimeter wave radar and camera | |
CN110415286B (en) | External parameter calibration method of multi-flight time depth camera system | |
CN113096183A (en) | Obstacle detection and measurement method based on laser radar and monocular camera | |
US11259000B2 (en) | Spatiotemporal calibration of RGB-D and displacement sensors | |
CN115830142A (en) | Camera calibration method, camera target detection and positioning method, camera calibration device, camera target detection and positioning device and electronic equipment | |
CN114488094A (en) | Vehicle-mounted multi-line laser radar and IMU external parameter automatic calibration method and device | |
CN116697888A (en) | Method and system for measuring three-dimensional coordinates and displacement of target point in motion | |
CN114643598B (en) | Mechanical arm tail end position estimation method based on multi-information fusion | |
CN114719770A (en) | Deformation monitoring method and device based on image recognition and spatial positioning technology | |
KR20030026497A (en) | Self-localization apparatus and method of mobile robot | |
CN109737871A (en) | A kind of scaling method of the relative position of three-dimension sensor and mechanical arm | |
CN112405526A (en) | Robot positioning method and device, equipment and storage medium | |
WO2021145280A1 (en) | Robot system | |
CN113012238B (en) | Method for quick calibration and data fusion of multi-depth camera | |
CN112509035A (en) | Double-lens image pixel point matching method for optical lens and thermal imaging lens | |
CN112509062B (en) | Calibration plate, calibration system and calibration method | |
CN112468801A (en) | Optical center testing method of wide-angle camera module, testing system and testing target board thereof |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||