CN114643598A - Mechanical arm tail end position estimation method based on multi-information fusion - Google Patents

Mechanical arm tail end position estimation method based on multi-information fusion Download PDF

Info

Publication number
CN114643598A
CN114643598A CN202210517157.6A CN202210517157A CN114643598A CN 114643598 A CN114643598 A CN 114643598A CN 202210517157 A CN202210517157 A CN 202210517157A CN 114643598 A CN114643598 A CN 114643598A
Authority
CN
China
Prior art keywords
mechanical arm
tail end
visual sensor
target
sensor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210517157.6A
Other languages
Chinese (zh)
Other versions
CN114643598B (en
Inventor
潘京辉
彭开香
潘月斗
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Science and Technology Beijing USTB
Original Assignee
University of Science and Technology Beijing USTB
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Science and Technology Beijing USTB filed Critical University of Science and Technology Beijing USTB
Priority to CN202210517157.6A priority Critical patent/CN114643598B/en
Publication of CN114643598A publication Critical patent/CN114643598A/en
Application granted granted Critical
Publication of CN114643598B publication Critical patent/CN114643598B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/0095Means or methods for testing manipulators
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20Instruments for performing navigational calculations

Abstract

The invention provides a mechanical arm tail end position estimation method based on multi-information fusion, and belongs to the technical field of mechanical arm pose measurement. The method comprises the following steps: determining a target; installing a visual sensor at the tail end of the mechanical arm, paving a target, and detecting the pose of the paved target through the visual sensor so as to calibrate the spatial position relation between the visual sensor and the tail end of the mechanical arm; obtaining the tail end position of the mechanical arm estimated based on the visual sensor based on the calibrated spatial position relationship between the visual sensor and the tail end of the mechanical arm; and under the condition of ensuring the consistency of the image data acquired by the visual sensor and the mechanical arm feedback data in a space domain and a time domain, performing data fusion by adopting multi-rate Kalman filtering to obtain fused mechanical arm tail end position information. By adopting the method and the device, the estimation precision of the tail end position of the mechanical arm can be improved.

Description

Mechanical arm tail end position estimation method based on multi-information fusion
Technical Field
The invention relates to the technical field of pose measurement of mechanical arms, in particular to a mechanical arm tail end position estimation method based on multi-information fusion.
Background
The mechanical arm assists people to solve the work of difficulty, dirtiness, dryness and danger, and is widely applied to the development fields of public service, industrial manufacturing, national safety and unknown fields. In recent years, methods for achieving position estimation by assisting a robot arm with an external sensor, such as stereoscopic vision, depth vision, an inertial sensor, a scanner, and a laser tracker, have come to work. With the rapid development of electronic technology and image processing technology, vision sensors have become new mastery force in the field of mechanical arm position measurement. However, the position model established based on vision has large calculation amount, the multi-information fusion has the problems of spatial domain and time domain inconsistency, complex fusion algorithm and the like, and the terminal position cannot be accurately estimated, so that great trouble is brought to the life of people.
In summary, an accurate method for fusing multiple detection means is urgently needed for estimating the tail end position of the mechanical arm at present, so that the tail end position of the mechanical arm can be measured with high precision.
Disclosure of Invention
The embodiment of the invention provides a mechanical arm tail end position estimation method based on multi-information fusion, which can improve the mechanical arm tail end position estimation precision. The technical scheme is as follows:
the embodiment of the invention provides a mechanical arm tail end position estimation method based on multi-information fusion, which comprises the following steps:
determining a target;
installing a visual sensor at the tail end of the mechanical arm, paving a target, and carrying out pose detection on the paved target through the visual sensor so as to calibrate the spatial position relation between the visual sensor and the tail end of the mechanical arm;
obtaining the tail end position of the mechanical arm estimated based on the visual sensor based on the calibrated spatial position relationship between the visual sensor and the tail end of the mechanical arm;
and under the condition of ensuring the consistency of the image data acquired by the visual sensor and the mechanical arm feedback data in a space domain and a time domain, performing data fusion by adopting multi-rate Kalman filtering to obtain the fused mechanical arm tail end position information.
Further, the target is a cubic target formed by three colors of red, green and blue, and 6 targets with different colors are formed by arranging the three colors in different sequences; the three colors of 6 surfaces of the same target are arranged in the same sequence;
three colors of each surface of the target form squares with side lengths of 1.4mm, 2.4mm and 3.0mm, and the central points of the 3 squares are the same;
the target has a through hole with a central via.
Further, the installing a visual sensor at the tail end of the mechanical arm, laying a target, and performing pose detection on the laid target through the visual sensor to calibrate the spatial position relationship between the visual sensor and the tail end of the mechanical arm includes:
installing a visual sensor at the tail end of the mechanical arm to ensure that the direction of an optical axis is consistent with that of the extension rod and fixedly connected;
selecting five targets, laying four targets into a ring shape, and placing the rest of 5 targets in the center of the ring shape;
by adjusting the pose states of the visual sensor and the mechanical arm, after four annular targets are projected to the central target and have equal pixel numbers on an image, the spatial relative positions of the visual sensor and the tail end of the mechanical arm are recorded by measurement
Figure 485472DEST_PATH_IMAGE001
Further, obtaining the end position of the mechanical arm estimated based on the visual sensor based on the calibrated spatial position relationship between the visual sensor and the end of the mechanical arm includes:
the method comprises the steps of uniformly paving 6 targets in a motion area of the mechanical arm, obtaining the positions of the targets in an image in a vertical imaging mode, and obtaining the tail end position of the mechanical arm estimated based on the visual sensor according to the positions of the targets in the image and the spatial position relation between a calibrated visual sensor and the tail end of the mechanical arm and by combining the parameters of the visual sensor and the ground resolution.
Further, the resulting vision sensor based estimated end-of-arm position is represented as:
Figure 21364DEST_PATH_IMAGE002
wherein the content of the first and second substances,
Figure 941916DEST_PATH_IMAGE003
and
Figure 39316DEST_PATH_IMAGE004
both indicate the robot arm tip position based on vision sensor estimation, indicate ground resolution,
Figure 248580DEST_PATH_IMAGE005
coordinates of a center point of an image representing the vision sensor,
Figure 396620DEST_PATH_IMAGE006
the coordinates of the center point of the target are represented,
Figure 391252DEST_PATH_IMAGE007
the position of the spatial coordinates of the target is represented,
Figure 377663DEST_PATH_IMAGE008
which represents the focal length of the lens,
Figure 589070DEST_PATH_IMAGE009
the size of the picture element is represented,
Figure 14235DEST_PATH_IMAGE010
representing the spatial relative position of the vision sensor and the end of the robotic arm.
Further, the ground resolution is expressed as:
Figure 128953DEST_PATH_IMAGE011
wherein the content of the first and second substances,
Figure 551844DEST_PATH_IMAGE012
and
Figure 984968DEST_PATH_IMAGE013
respectively representing the number of pixels corresponding to three line segments from long to short in the target.
Further, under the condition of ensuring the consistency of image data acquired by the visual sensor and mechanical arm feedback data in a space domain and a time domain, performing data fusion by using multi-rate kalman filtering, and obtaining the fused mechanical arm end position information includes:
in the aspect of time domain, an embedded hardware circuit is adopted to control the visual sensor to image through external triggering, and image data, the rising edge of a triggering signal and the time interval of an image frame synchronization signal are acquired
Figure 417086DEST_PATH_IMAGE014
(ii) a Meanwhile, the controller of the mechanical arm is controlled through external triggering, the rising edge of the triggering signal and the time interval of the initial signal of the data fed back by the mechanical arm
Figure 917469DEST_PATH_IMAGE015
There is a time difference between the vision sensor and the robot arm
Figure 26108DEST_PATH_IMAGE016
After triggering the vision sensor, delaying
Figure 431682DEST_PATH_IMAGE017
Triggering the mechanical arm controller after time to ensure the synchronization of the image data and the feedback data of the mechanical arm;
in the aspect of airspace, according to the spatial position relation between the calibrated visual sensor and the tail end of the mechanical arm, the spatial consistency of image data and mechanical arm feedback data is ensured;
and fusing the data measured by the vision sensor and the sensor of the mechanical arm by adopting a multi-rate Kalman filtering mode to obtain fused tail end position information of the mechanical arm.
Further, the fused mechanical arm end position information is obtained and expressed as:
Figure 418223DEST_PATH_IMAGE018
wherein the subscript
Figure 287959DEST_PATH_IMAGE019
Is shown as
Figure 567500DEST_PATH_IMAGE019
Imaging by a secondary vision sensor;
Figure 460369DEST_PATH_IMAGE020
indicating the second in the imaging period of the vision sensor
Figure 985023DEST_PATH_IMAGE020
Collecting the positions of the mechanical arms;
Figure 912528DEST_PATH_IMAGE021
representing the ratio of the imaging period of the visual sensor to the period of mechanical arm acquisition;
Figure 159707DEST_PATH_IMAGE022
representing the fused tail end position information of the mechanical arm;
Figure 556184DEST_PATH_IMAGE023
representing the difference value of the position difference of the adjacent frame vision sensor and the position difference of the mechanical arm at the same time;
Figure 602638DEST_PATH_IMAGE024
representing the three-axis tail end position information estimated by the mechanical arm;
Figure 899496DEST_PATH_IMAGE025
representing a robot arm tip position estimated based on a vision sensor;
Figure 271571DEST_PATH_IMAGE026
representing the difference of the installation positions of the visual sensor and the tail end of the mechanical arm;
Figure 952082DEST_PATH_IMAGE027
and
Figure 67806DEST_PATH_IMAGE028
respectively representing system noise and observation noise of unknown statistical properties.
The technical scheme provided by the embodiment of the invention has the beneficial effects that at least:
in embodiments of the invention, a target is determined; installing a visual sensor at the tail end of the mechanical arm, paving a target, and carrying out pose detection on the paved target through the visual sensor so as to calibrate the spatial position relation between the visual sensor and the tail end of the mechanical arm; obtaining the tail end position of the mechanical arm estimated based on the visual sensor based on the calibrated spatial position relationship between the visual sensor and the tail end of the mechanical arm; and under the condition of ensuring the consistency of the image data acquired by the visual sensor and the mechanical arm feedback data in a space domain and a time domain, performing data fusion by adopting multi-rate Kalman filtering to obtain the fused mechanical arm tail end position information. Thus, the accuracy of estimating the position of the end of the robot arm can be improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic flowchart of a method for estimating a position of an end of a mechanical arm based on multi-information fusion according to an embodiment of the present invention;
FIG. 2 is a block flow diagram of a method for estimating a position of an end of a mechanical arm based on multi-information fusion according to an embodiment of the present invention;
FIG. 3 is a schematic illustration of a target provided by an embodiment of the present invention;
FIG. 4 is a schematic diagram of a visual sensor and a robot arm tip for estimating a spatial position according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of ground resolution provided by an embodiment of the present invention;
FIG. 6 is a schematic illustration of a target mapping in an image provided by an embodiment of the present invention;
FIG. 7 is a schematic diagram of a temporal synchronization relationship between a vision sensor and a robotic arm according to an embodiment of the present invention;
fig. 8 is a schematic coordinate system diagram of a multi-information-fused mechanical arm end position estimation method according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
As shown in fig. 1 and fig. 2, an embodiment of the present invention provides a method for estimating a position of a distal end of a mechanical arm based on multi-information fusion, including:
s101, determining a target;
in the embodiment, a small and unique target with various colors is determined according to the control precision and the motion range of the mechanical arm; the target is a 3mm cube formed by three colors of red, green and blue, and 6 targets with different colors are formed by arranging the three colors in different sequences, so that the identification is facilitated; and the three colors of 6 surfaces of the same target are arranged in the same order;
as shown in fig. 3, the three colors of each surface of the target form squares with sides of 1.4mm, 2.4mm and 3.0mm, and the center points of the 3 squares are the same;
the target has a through hole with a central through hole diameter of 0.2mm, so that the target can be accurately installed and fixed.
S102, mounting a visual sensor at the tail end of the mechanical arm, laying a target, and carrying out pose detection on the laid target through the visual sensor so as to calibrate the spatial position relation between the visual sensor and the tail end of the mechanical arm; as shown in fig. 4, the method may specifically include the following steps:
installing a visual sensor at the tail end of the mechanical arm to ensure that the direction of an optical axis is consistent with that of the extension rod and fixedly connected;
selecting five targets, laying four targets into a ring shape, and placing the rest of 5 targets in the center of the ring shape;
by adjusting the pose states of the visual sensor and the mechanical arm, after four annular targets are projected to the central target and have equal pixel numbers on an image, the spatial relative positions of the visual sensor and the tail end of the mechanical arm are recorded by measurement
Figure 953591DEST_PATH_IMAGE001
In this embodiment, the position relationship of each target projected in the image is detected by the vision sensor, so as to calibrate the spatial position relationship between the vision sensor and the tail end of the mechanical arm.
S103, obtaining the tail end position of the mechanical arm estimated based on the visual sensor based on the calibrated spatial position relation between the visual sensor and the tail end of the mechanical arm;
in this embodiment, 6 targets are uniformly laid in the region where the mechanical arm moves, and the positions of the targets in the image (specifically, the positions of the center points of the targets) are obtained in a vertical imaging manner
Figure 496568DEST_PATH_IMAGE029
) And obtaining the mechanical arm tail end position estimated based on the visual sensor according to the position of the target in the image and the calibrated spatial position relation between the visual sensor and the mechanical arm tail end and by combining the parameters of the visual sensor and the ground resolution.
In this embodiment, as shown in FIG. 5, the focal length in the vertical view of the vision sensor is
Figure 398796DEST_PATH_IMAGE008
Size of picture element
Figure 110355DEST_PATH_IMAGE009
. Any point in the focal plane
Figure 414429DEST_PATH_IMAGE030
And adjacent dots
Figure 128307DEST_PATH_IMAGE031
At a height of
Figure 281946DEST_PATH_IMAGE032
In the case of (1), corresponding
Figure 490204DEST_PATH_IMAGE033
Ground resolution
Figure 835735DEST_PATH_IMAGE034
Figure 235361DEST_PATH_IMAGE035
Indicating a point on the focal plane
Figure 564711DEST_PATH_IMAGE036
The abscissa of the (c) axis of the (c),
Figure 373398DEST_PATH_IMAGE037
to represent
Figure 839015DEST_PATH_IMAGE038
The scene point corresponding to the point is determined,
Figure 143963DEST_PATH_IMAGE039
to represent
Figure 960609DEST_PATH_IMAGE040
And (5) corresponding scene points.
The vision sensor obtains a target image, and the position of the center point of the target is determined according to the color of the target
Figure 510670DEST_PATH_IMAGE029
And calculating the average value according to three different side lengths of the target, wherein the average value is expressed as:
Figure 142378DEST_PATH_IMAGE011
wherein the content of the first and second substances,
Figure 634539DEST_PATH_IMAGE012
and
Figure 423635DEST_PATH_IMAGE013
respectively representing the number of pixels corresponding to three line segments from long to short in the target.
As shown in fig. 6, coordinates of a center point of an image of a known vision sensor
Figure 761075DEST_PATH_IMAGE005
The coordinate of the central point of the target is
Figure 716130DEST_PATH_IMAGE006
Position of spatial coordinates of the target
Figure 910351DEST_PATH_IMAGE007
Obtaining the end position of the mechanical arm based on the vision sensor estimation
Figure 452322DEST_PATH_IMAGE041
Figure 593454DEST_PATH_IMAGE002
Wherein the content of the first and second substances,
Figure 137436DEST_PATH_IMAGE003
indicating the robot arm tip position estimated based on the vision sensor.
S104, under the condition of ensuring the consistency of image data acquired by the visual sensor and mechanical arm feedback data in a space domain and a time domain, performing data fusion by adopting multi-rate Kalman filtering to obtain fused mechanical arm tail end position information, and specifically, the method comprises the following steps:
in the time domain, as shown in fig. 7, an embedded hardware circuit is adopted to control the visual sensor to image through external triggering, and image data is acquired, the rising edge of a triggering signal and the time interval of an image frame synchronization signal
Figure 705821DEST_PATH_IMAGE014
(ii) a Meanwhile, the controller of the mechanical arm is controlled through external trigger, the rising edge of the trigger signal and the time interval of the initial signal of the feedback data (including position and angle information) of the mechanical arm
Figure 531826DEST_PATH_IMAGE015
There is a time difference between the vision sensor and the robot arm
Figure 725915DEST_PATH_IMAGE016
That is, the image data is delayed in the time of the mechanical arm feedback data as
Figure 875137DEST_PATH_IMAGE017
(ii) a Thus, after triggering the vision sensor, a time delay is imposed
Figure 630735DEST_PATH_IMAGE042
Triggering the mechanical arm controller after time to ensure the synchronization of the image data and the feedback data of the mechanical arm;
in the aspect of airspace, according to the spatial position relation between the calibrated visual sensor and the tail end of the mechanical arm, the spatial consistency of image data and mechanical arm feedback data is ensured; as shown in fig. 8, the robot arm uses the base as a coordinate system, the vision sensor uses the ground target as a coordinate system, the target and the ground base have no axial distance, and only the horizontal position exists
Figure 130986DEST_PATH_IMAGE043
And
Figure 671644DEST_PATH_IMAGE044
the direction relationship; mechanical arm tail end position coordinate system
Figure 206531DEST_PATH_IMAGE045
Is established in a mechanical arm base coordinate system
Figure 601871DEST_PATH_IMAGE046
On the basis of the above steps; mechanical arm and vision sensor spatially exist
Figure 589418DEST_PATH_IMAGE047
The difference in the positional distance of; coordinate system of vision sensor
Figure 922049DEST_PATH_IMAGE048
And the target coordinate system
Figure 62174DEST_PATH_IMAGE049
Wherein, the upper mark
Figure 143263DEST_PATH_IMAGE050
Is shown as
Figure 867374DEST_PATH_IMAGE051
The position of the spatial coordinates of the individual targets,
Figure 692110DEST_PATH_IMAGE051
taking the value of 1-6;
data fusion: in consideration of the problem that updating frequency of position information estimated by a mechanical arm and position information estimated by a visual sensor is inconsistent, data measured by the visual sensor and the mechanical arm self sensor (including an angle sensor and a position sensor) are fused in a multi-rate Kalman filtering mode to obtain fused higher-precision mechanical arm tail end position information.
In this embodiment, the filtering period may be divided into two processes of time updating and measurement updating. When the filtering time is short, the position measurement value of the visual sensor with low speed does not exist, and the filter only updates the position information of the mechanical arm; when the filtering moment, a slow-speed visual position measurement value appears, and the filter updates time and measurement simultaneously.
In this embodiment, the position information of the end of the mechanical arm after fusion is obtained and expressed as:
Figure 483480DEST_PATH_IMAGE018
wherein the subscript
Figure 1049DEST_PATH_IMAGE019
Is shown as
Figure 212456DEST_PATH_IMAGE019
Imaging by a secondary vision sensor;
Figure 575305DEST_PATH_IMAGE020
indicating the second in the imaging period of the vision sensor
Figure 424443DEST_PATH_IMAGE020
Collecting the positions of the mechanical arms;
Figure 158918DEST_PATH_IMAGE021
representing the ratio of the imaging period of the visual sensor to the period of mechanical arm acquisition;
Figure 608354DEST_PATH_IMAGE022
representing the fused tail end position information of the mechanical arm;
Figure 525626DEST_PATH_IMAGE023
representing the difference value of the position difference of the adjacent frame vision sensor and the position difference of the mechanical arm at the same time;
Figure 478538DEST_PATH_IMAGE024
representing the three-axis tail end position information estimated by the mechanical arm;
Figure 587178DEST_PATH_IMAGE025
representing a robot arm tip position estimated based on a vision sensor;
Figure 55068DEST_PATH_IMAGE026
representing the difference of the installation positions of the visual sensor and the tail end of the mechanical arm;
Figure 41610DEST_PATH_IMAGE027
and
Figure 114608DEST_PATH_IMAGE028
respectively representing system noise and observation noise of which the statistical properties are unknown.
In the present embodiment, the first and second electrodes are,
Figure 128569DEST_PATH_IMAGE023
representing the spatial relative position of a vision sensor to the end of a robotic arm
Figure 287018DEST_PATH_IMAGE010
The adjacent frame position difference.
In this embodiment, S101-S103 constitute a visual sensor-based end position estimation; s104 is the estimation of the end position based on the multi-information fusion of the visual sensor and the mechanical arm self-sensor.
The method for estimating the tail end position of the mechanical arm based on multi-information fusion at least has the following beneficial effects:
1. the invention designs a light and small cubic target consisting of red, green and blue. The target consists of three different colors, so that the identification is convenient; the target is designed in three different sizes, so that the pixel size measurement precision is high after the pixel is collected by the vision sensor conveniently; the target is provided with a central through hole, so that the target is convenient to accurately install; the target can reduce the identification of the visual sensor, improve the measurement precision of the visual sensor and facilitate the installation of the target.
2. The invention provides a method for constructing a space position model of a visual sensor and a mechanical arm through a target
Figure 342830DEST_PATH_IMAGE010
End position model of vision sensor
Figure 785181DEST_PATH_IMAGE048
Therefore, the influence of the installation error of the multiple sensors on the system is reduced, and meanwhile, a complex visual sensor tail end position model is realized, so that the problem that the existing visual sensor tail end position modeling is complex is solved.
3. The invention adopts the multi-rate Kalman filtering to realize the consistency of asynchronous data in a space domain and a time domain; the characteristics of respective detection of multiple sensors are fully exerted, complementation and fusion processing of multiple information data are realized, and further the estimation precision of the tail end position of the mechanical arm is improved, so that the problems of inconsistency of airspace and time domain, complex fusion algorithm and low estimation precision of the tail end position of the mechanical arm in the prior art in multiple information fusion are solved.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and should not be taken as limiting the scope of the present invention, which is intended to cover any modifications, equivalents, improvements, etc. within the spirit and scope of the present invention.

Claims (8)

1. A mechanical arm tail end position estimation method based on multi-information fusion is characterized by comprising the following steps:
determining a target;
installing a visual sensor at the tail end of the mechanical arm, paving a target, and carrying out pose detection on the paved target through the visual sensor so as to calibrate the spatial position relation between the visual sensor and the tail end of the mechanical arm;
obtaining the tail end position of the mechanical arm estimated based on the visual sensor based on the calibrated spatial position relationship between the visual sensor and the tail end of the mechanical arm;
and under the condition of ensuring the consistency of the image data acquired by the visual sensor and the mechanical arm feedback data in a space domain and a time domain, performing data fusion by adopting multi-rate Kalman filtering to obtain the fused mechanical arm tail end position information.
2. The method for estimating the tail end position of the mechanical arm based on the multi-information fusion as claimed in claim 1, wherein the target is a cubic target composed of three colors of red, green and blue, and 6 targets with different colors are formed by arranging the three colors in different orders; the three colors of 6 surfaces of the same target are arranged in the same sequence;
the three colors of each surface of the target form squares with side lengths of 1.4mm, 2.4mm and 3.0mm, and the central points of the 3 squares are the same;
the target has a through hole with a central via.
3. The method for estimating the position of the tail end of the mechanical arm based on multi-information fusion according to claim 1, wherein the steps of installing the visual sensor at the tail end of the mechanical arm, laying the target, and performing pose detection on the laid target through the visual sensor to calibrate the spatial position relationship between the visual sensor and the tail end of the mechanical arm comprise:
installing a visual sensor at the tail end of the mechanical arm, ensuring that the direction of an optical axis is consistent with that of the extension rod, and fixedly connecting;
selecting five targets, laying four targets into a ring shape, and placing the rest of 5 targets in the center of the ring shape;
by adjusting the pose states of the visual sensor and the mechanical arm, after four annular targets are projected to the central target and have equal pixel numbers on an image, the spatial relative positions of the visual sensor and the tail end of the mechanical arm are recorded by measurement
Figure 447659DEST_PATH_IMAGE001
4. The method for estimating the tail end position of the mechanical arm based on multi-information fusion as claimed in claim 1, wherein the obtaining the tail end position of the mechanical arm based on the vision sensor estimation based on the calibrated spatial position relationship between the vision sensor and the tail end of the mechanical arm comprises:
the method comprises the steps of uniformly paving 6 targets in a motion area of the mechanical arm, obtaining the positions of the targets in an image in a vertical imaging mode, and obtaining the tail end position of the mechanical arm estimated based on the visual sensor according to the positions of the targets in the image and the spatial position relation between a calibrated visual sensor and the tail end of the mechanical arm and by combining the parameters of the visual sensor and the ground resolution.
5. The method for estimating the position of the tail end of the mechanical arm based on multi-information fusion according to claim 4, wherein the obtained position of the tail end of the mechanical arm based on the vision sensor estimation is represented as follows:
Figure 410804DEST_PATH_IMAGE002
wherein the content of the first and second substances,
Figure 800329DEST_PATH_IMAGE003
and
Figure 257855DEST_PATH_IMAGE004
indicates the estimated end of arm position based on visual sensors, indicates ground resolution,
Figure 118232DEST_PATH_IMAGE005
coordinates of a center point of an image representing the vision sensor,
Figure 799749DEST_PATH_IMAGE006
the coordinates of the center point of the target are represented,
Figure 614253DEST_PATH_IMAGE007
the position of the spatial coordinates of the target is represented,
Figure 933194DEST_PATH_IMAGE008
which represents the focal length of the lens,
Figure 398810DEST_PATH_IMAGE009
the size of the picture element is represented,
Figure 205223DEST_PATH_IMAGE010
representing the spatial relative position of the vision sensor and the end of the robotic arm.
6. The method for estimating the position of the tail end of the mechanical arm based on multi-information fusion according to claim 5, wherein the ground resolution is represented as:
Figure 818607DEST_PATH_IMAGE011
wherein, the first and the second end of the pipe are connected with each other,
Figure 867203DEST_PATH_IMAGE012
and
Figure 187326DEST_PATH_IMAGE013
respectively representing the number of pixels corresponding to three line segments from long to short in the target.
7. The method for estimating the tail end position of the mechanical arm based on multi-information fusion according to claim 1, wherein the step of performing data fusion by using multi-rate Kalman filtering under the condition of ensuring the consistency of image data acquired by a visual sensor and mechanical arm feedback data in a space domain and a time domain comprises the following steps of:
in the aspect of time domain, an embedded hardware circuit is adopted to control the visual sensor to image through external triggering, and image data, the rising edge of a triggering signal and the time interval of an image frame synchronization signal are acquired
Figure 226958DEST_PATH_IMAGE014
(ii) a Meanwhile, the controller of the mechanical arm is controlled through external triggering, the rising edge of the triggering signal and the time interval of the initial signal of the data fed back by the mechanical arm
Figure 780168DEST_PATH_IMAGE015
There is a time difference between the vision sensor and the robot arm
Figure 852029DEST_PATH_IMAGE016
After triggering the vision sensor, delaying
Figure 308549DEST_PATH_IMAGE017
Triggering the mechanical arm controller after time to ensure the synchronization of the image data and the feedback data of the mechanical arm;
in the aspect of airspace, according to the spatial position relation between the calibrated visual sensor and the tail end of the mechanical arm, the spatial consistency of image data and mechanical arm feedback data is ensured;
and fusing the data measured by the vision sensor and the sensor of the mechanical arm by adopting a multi-rate Kalman filtering mode to obtain fused tail end position information of the mechanical arm.
8. The method for estimating the position of the end of the mechanical arm based on the multi-information fusion according to claim 1, wherein the position information of the end of the mechanical arm after the fusion is obtained is expressed as:
Figure 502770DEST_PATH_IMAGE018
wherein the subscript
Figure 277697DEST_PATH_IMAGE019
Is shown as
Figure 418828DEST_PATH_IMAGE019
Imaging by a secondary vision sensor;
Figure 526593DEST_PATH_IMAGE020
indicating the second in the imaging period of the vision sensor
Figure 609824DEST_PATH_IMAGE020
Collecting the positions of the mechanical arms;
Figure 357200DEST_PATH_IMAGE021
representing the ratio of the imaging period of the visual sensor to the period of mechanical arm acquisition;
Figure 52755DEST_PATH_IMAGE022
representing the fused tail end position information of the mechanical arm;
Figure 264293DEST_PATH_IMAGE023
the difference value of the position difference of the adjacent frame vision sensor and the position difference of the mechanical arm at the same time is represented;
Figure 518426DEST_PATH_IMAGE024
representing the three-axis tail end position information estimated by the mechanical arm;
Figure 753098DEST_PATH_IMAGE025
presentation based on visual sensorsEstimated end of arm position;
Figure 783502DEST_PATH_IMAGE026
representing the difference of the installation positions of the visual sensor and the tail end of the mechanical arm;
Figure 567656DEST_PATH_IMAGE027
and
Figure 477844DEST_PATH_IMAGE028
respectively representing system noise and observation noise of unknown statistical properties.
CN202210517157.6A 2022-05-13 2022-05-13 Mechanical arm tail end position estimation method based on multi-information fusion Active CN114643598B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210517157.6A CN114643598B (en) 2022-05-13 2022-05-13 Mechanical arm tail end position estimation method based on multi-information fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210517157.6A CN114643598B (en) 2022-05-13 2022-05-13 Mechanical arm tail end position estimation method based on multi-information fusion

Publications (2)

Publication Number Publication Date
CN114643598A true CN114643598A (en) 2022-06-21
CN114643598B CN114643598B (en) 2022-09-13

Family

ID=81997310

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210517157.6A Active CN114643598B (en) 2022-05-13 2022-05-13 Mechanical arm tail end position estimation method based on multi-information fusion

Country Status (1)

Country Link
CN (1) CN114643598B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101402199A (en) * 2008-10-20 2009-04-08 北京理工大学 Hand-eye type robot movable target extracting method with low servo accuracy based on visual sensation
CN106097390A (en) * 2016-06-13 2016-11-09 北京理工大学 A kind of robot kinematics's parameter calibration method based on Kalman filtering
CN108362266A (en) * 2018-02-22 2018-08-03 北京航空航天大学 One kind is based on EKF laser rangings auxiliary monocular vision measurement method and system
CN110136208A (en) * 2019-05-20 2019-08-16 北京无远弗届科技有限公司 A kind of the joint automatic calibration method and device of Visual Servoing System
US10699421B1 (en) * 2017-03-29 2020-06-30 Amazon Technologies, Inc. Tracking objects in three-dimensional space using calibrated visual cameras and depth cameras
CN112539746A (en) * 2020-10-21 2021-03-23 济南大学 Robot vision/INS combined positioning method and system based on multi-frequency Kalman filtering
CN112917510A (en) * 2019-12-06 2021-06-08 中国科学院沈阳自动化研究所 Industrial robot space position appearance precision test system
CN113643380A (en) * 2021-08-16 2021-11-12 安徽元古纪智能科技有限公司 Mechanical arm guiding method based on monocular camera vision target positioning

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101402199A (en) * 2008-10-20 2009-04-08 北京理工大学 Hand-eye type robot movable target extracting method with low servo accuracy based on visual sensation
CN106097390A (en) * 2016-06-13 2016-11-09 北京理工大学 A kind of robot kinematics's parameter calibration method based on Kalman filtering
US10699421B1 (en) * 2017-03-29 2020-06-30 Amazon Technologies, Inc. Tracking objects in three-dimensional space using calibrated visual cameras and depth cameras
CN108362266A (en) * 2018-02-22 2018-08-03 北京航空航天大学 One kind is based on EKF laser rangings auxiliary monocular vision measurement method and system
CN110136208A (en) * 2019-05-20 2019-08-16 北京无远弗届科技有限公司 A kind of the joint automatic calibration method and device of Visual Servoing System
CN112917510A (en) * 2019-12-06 2021-06-08 中国科学院沈阳自动化研究所 Industrial robot space position appearance precision test system
CN112539746A (en) * 2020-10-21 2021-03-23 济南大学 Robot vision/INS combined positioning method and system based on multi-frequency Kalman filtering
CN113643380A (en) * 2021-08-16 2021-11-12 安徽元古纪智能科技有限公司 Mechanical arm guiding method based on monocular camera vision target positioning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
王亚威: "卡尔曼滤波与模糊逻辑结合的机械手视觉伺服控制方法研究", 《自动化技术与应用》 *
陈益等: "简化UKF算法在摄像机标定中的应用", 《计算机工程》 *

Also Published As

Publication number Publication date
CN114643598B (en) 2022-09-13

Similar Documents

Publication Publication Date Title
TWI408486B (en) Camera with dynamic calibration and method thereof
CN102410832B (en) Position and orientation measurement apparatus and position and orientation measurement method
CN109737883A (en) A kind of three-dimensional deformation dynamic measurement system and measurement method based on image recognition
EP3032818B1 (en) Image processing device
US9881377B2 (en) Apparatus and method for determining the distinct location of an image-recording camera
CN103782232A (en) Projector and control method thereof
CN112581545B (en) Multi-mode heat source identification and three-dimensional space positioning system, method and storage medium
JP2015197344A (en) Method and device for continuously monitoring structure displacement
CN112070841A (en) Rapid combined calibration method for millimeter wave radar and camera
CN110415286B (en) External parameter calibration method of multi-flight time depth camera system
CN113096183A (en) Obstacle detection and measurement method based on laser radar and monocular camera
US11259000B2 (en) Spatiotemporal calibration of RGB-D and displacement sensors
CN115830142A (en) Camera calibration method, camera target detection and positioning method, camera calibration device, camera target detection and positioning device and electronic equipment
CN114488094A (en) Vehicle-mounted multi-line laser radar and IMU external parameter automatic calibration method and device
CN116697888A (en) Method and system for measuring three-dimensional coordinates and displacement of target point in motion
CN114643598B (en) Mechanical arm tail end position estimation method based on multi-information fusion
CN114719770A (en) Deformation monitoring method and device based on image recognition and spatial positioning technology
KR20030026497A (en) Self-localization apparatus and method of mobile robot
CN109737871A (en) A kind of scaling method of the relative position of three-dimension sensor and mechanical arm
CN112405526A (en) Robot positioning method and device, equipment and storage medium
WO2021145280A1 (en) Robot system
CN113012238B (en) Method for quick calibration and data fusion of multi-depth camera
CN112509035A (en) Double-lens image pixel point matching method for optical lens and thermal imaging lens
CN112509062B (en) Calibration plate, calibration system and calibration method
CN112468801A (en) Optical center testing method of wide-angle camera module, testing system and testing target board thereof

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant