CN112862812A - Method, system and device for optimizing operation space trajectory of flexible robot - Google Patents
- Publication number: CN112862812A (application CN202110260593.5A)
- Authority
- CN
- China
- Prior art keywords
- arm
- flexible robot
- image
- camera
- ellipse
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06T7/0002 — Image analysis; inspection of images, e.g. flaw detection
- G06N3/006 — Artificial life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
- G06T5/70 — Image enhancement or restoration; denoising, smoothing
- G06T7/13 — Segmentation; edge detection
- G06T7/73 — Determining position or orientation of objects or cameras using feature-based methods
- G06T2207/10004 — Image acquisition modality; still image, photographic image
- G06T2207/20084 — Special algorithmic details; artificial neural networks [ANN]
- G06T2207/30244 — Subject of image; camera pose
- Y02T10/40 — Engine management systems
Abstract
The invention discloses a method, a system and a device for optimizing the operation space trajectory of a flexible robot. The method comprises the following steps: acquiring images and preprocessing them; extracting features from the preprocessed images; calculating the end pose and arm-shape parameters of the flexible robot from the extracted ellipse parameters; calculating the errors; constructing a space optimization model and obtaining the optimal joint angles; and finishing the optimization once the errors are judged smaller than preset thresholds. The system comprises an image processing module, a feature extraction module, a kinematic parameter calculation module, an error calculation module, a trajectory optimization module and a threshold judgment module. The device comprises a base, the flexible robot, a fixed camera, a movable camera and an auxiliary mechanical arm. With the invention, the arm shape of the flexible robot can be sensed in real time and the motion accuracy of the robot improved. The method, system and device can be widely applied in the field of robot trajectory optimization.
Description
Technical Field
The invention relates to the field of robot trajectory optimization, in particular to a method, a system and a device for optimizing an operation space trajectory of a flexible robot.
Background
Compared with conventionally structured robots, the rope-driven super-redundant robot features a slender body, separation of the actuation electronics from the arm, flexible motion and lightweight links. These characteristics make it well suited to small confined spaces and complex unstructured scenes. Years of research show that super-redundant robots are increasingly applied to on-orbit servicing in space; their structure and drive mode make them well suited to complex space environments, offering in particular strong bending capability, a simple structure and small motion loads. However, because the arm links are slender and radially curved, the traditional approach of attaching cooperative markers for stereoscopic vision measurement is impractical: machining markers onto a curved surface is difficult, and a mathematical model between the measurement coordinate system and the reference coordinate system is hard to establish. In addition, while the super-redundant robot executes tasks, different arm links easily occlude one another or yield incomplete edge features; under such conditions traditional stereoscopic vision measurement suffers large arm-shape perception errors or outright failure of three-dimensional reconstruction, and therefore cannot satisfy the requirement for the super-redundant robot to pass through narrow spaces under multiple arm-shape and joint constraints.
Disclosure of Invention
The invention aims to provide a method, a system and a device for optimizing the operation space trajectory of a flexible robot that overcome feature occlusion and the absence of cooperative markers, sense the arm shape of the flexible robot in real time, and improve the motion accuracy of the robot.
The first technical scheme adopted by the invention is as follows: a method for optimizing the operation space trajectory of a flexible robot, comprising the following steps:
acquiring an arm link cross-section image and preprocessing it to obtain a preprocessed arm link image;
performing feature extraction on the preprocessed arm link image to obtain ellipse parameters;
calculating the end pose and arm-shape parameters of the flexible robot from the ellipse parameters;
calculating the end pose difference and the arm-shape error from the end pose, the arm-shape parameters and their expected values;
obtaining the optimal joint angles from the pose difference and a pre-constructed space optimization model;
and finishing the optimization once the arm-shape error and the pose difference are judged smaller than preset thresholds.
Further, the step of acquiring an arm link cross-section image and preprocessing it to obtain a preprocessed arm link image specifically comprises:
capturing images of the arm link cross section with a plurality of cameras arranged around the flexible robot to obtain an arm link cross-section image;
and performing grayscale conversion, image segmentation and image filtering on the cross-section image to obtain the preprocessed arm link image.
Further, the step of extracting features from the preprocessed arm link image to obtain ellipse parameters specifically comprises:
performing edge detection on the preprocessed arm link image to obtain discrete ellipse pixel points;
fitting the discrete ellipse pixel points to obtain coarse ellipse fitting parameters;
and refining the coarse ellipse fitting parameters with a pre-constructed optimization model of the multi-view fusion pose measurement error to obtain accurate ellipse parameters.
Further, the step of fitting the discrete ellipse pixel points to obtain coarse ellipse fitting parameters specifically comprises:
representing the imaging of the arm link on the image plane by the binary quadratic function of a spatial elliptical arc to obtain imaging pixel points;
and fitting the imaging pixel points with a nonlinear least-squares equation to obtain the coarse ellipse fitting parameters.
Further, the step of refining the coarse ellipse fitting parameters with the pre-constructed optimization model of the multi-view fusion pose measurement error to obtain accurate ellipse parameters specifically comprises:
unifying the coordinate systems of all cameras into one coordinate system;
constructing the optimization model of the multi-view fusion pose measurement error and optimizing the ellipse parameters acquired by each camera;
and solving the optimization model of the multi-view fusion pose measurement error with a particle swarm optimization algorithm to obtain the accurate ellipse parameters.
Further, in the optimization model of the multi-view fusion pose measurement error,
where k indexes the k-th pixel point of the elliptical arc on the image plane, E denotes the fitted ellipse, Δp_{i1} denotes the relative deviation, expressed in the frame of camera 1, between the end position measured by camera i and by camera 1, Δn_{i1} denotes the corresponding relative deviation of the measured end normal vectors, u_{ci} and v_{ci} denote the abscissa and ordinate of the fitted ellipse center measured by camera i, u_{ik} and v_{ik} denote the abscissa and ordinate of the k-th discrete pixel point acquired by camera i, b_i and a_i denote the fitted semi-minor and semi-major axes measured by camera i, and φ_i denotes the fitted ellipse orientation angle measured by camera i.
Further, the step of calculating the end pose and arm-shape parameters of the flexible robot from the ellipse parameters specifically comprises:
calculating the position and attitude of the arm link cross-section center of the flexible robot from the accurate ellipse parameters;
and calculating the center pose of each cylindrical arm segment from the kinematics, obtaining the configuration of the flexible robot and the pose of its end.
Further, the pre-constructed space optimization model is specifically a space optimization model established for the task of the flexible robot crossing a slit, namely:
an operation space optimization model under multiple arm-shape and joint-angle constraints, established so that the end pose difference of the flexible robot is minimized, the direction of the arm segment inside the slit is kept as perpendicular as possible to the slit normal vector, and the actual joint angle limits are respected.
The second technical scheme adopted by the invention is as follows: an operation space trajectory optimization system of a flexible robot, comprising:
an image processing module for acquiring an arm link cross-section image and preprocessing it to obtain a preprocessed arm link image;
a feature extraction module for extracting features from the preprocessed arm link image to obtain ellipse parameters;
a kinematic parameter calculation module for calculating the end pose and arm-shape parameters of the flexible robot from the ellipse parameters;
an error calculation module for calculating the end pose difference and the arm-shape error from the end pose, the arm-shape parameters and their expected values;
an operation space trajectory optimization module for obtaining the optimal joint angles from the pose difference and a pre-constructed space optimization model;
and a threshold judgment module for judging that the arm-shape error and the pose difference are smaller than preset thresholds and finishing the optimization.
The third technical scheme adopted by the invention is as follows: an operation space trajectory optimization device of a flexible robot, comprising a base, the flexible robot, a fixed camera, a movable camera and an auxiliary mechanical arm, wherein:
the flexible robot is fixed on the base and is a rope-driven super-redundant mechanical arm whose arm links are substantially cylindrical with circular radial cross sections;
the fixed camera is fixed on the base and photographs the flexible robot;
and the movable camera is fixed at the end of the auxiliary mechanical arm, so that images from different angles can be acquired by adjusting the auxiliary mechanical arm.
The method, the system and the device have the following beneficial effects: multiple cameras photograph the arm link from different axial viewing angles to obtain arm-shape images, and the information of the multiple monocular images is fused. This overcomes two weaknesses of traditional vision measurement methods: low accuracy when a monocular camera captures few effective pixel points, and reconstruction failure when features are occluded. Accurate circular cross-section projection information is obtained through the multi-view fusion optimization equation, improving the measurement accuracy of the robot configuration. Meanwhile, the high-precision arm-shape sensing result and the end pose are constrained synchronously and an optimization model is established, so the operation space trajectory can be optimized under multiple arm-shape and joint-angle constraints, greatly improving the control accuracy of the flexible robot.
Drawings
FIG. 1 is a flowchart illustrating the steps of an operation space trajectory optimization method of a flexible robot according to the present invention;
FIG. 2 is a block diagram of an operation space trajectory optimization system of a flexible robot according to the present invention;
FIG. 3 is a schematic structural diagram of an operation space trajectory optimization device of a flexible robot according to the present invention;
fig. 4 is a schematic diagram of a flexible robot for performing tasks across a slit according to an embodiment of the present invention.
Reference numerals: 1. a flexible robot; 2. fixing the camera; 3. a movable camera; 4. an auxiliary mechanical arm; 5. a base.
Detailed Description
The invention is described in further detail below with reference to the figures and the specific embodiments. The step numbers in the following embodiments are provided only for convenience of illustration, the order between the steps is not limited at all, and the execution order of each step in the embodiments can be adapted according to the understanding of those skilled in the art.
Referring to fig. 1, the invention provides an operation space trajectory optimization method of a flexible robot, comprising the following steps:
acquiring an arm link cross-section image and preprocessing it to obtain a preprocessed arm link image;
performing feature extraction on the preprocessed arm link image to obtain ellipse parameters;
calculating the end pose and arm-shape parameters of the flexible robot from the ellipse parameters;
calculating the end pose difference and the arm-shape error from the end pose, the arm-shape parameters and their expected values;
obtaining the optimal joint angles from the pose difference and a pre-constructed space optimization model;
and finishing the optimization once the arm-shape error and the pose difference are judged smaller than preset thresholds.
Further, as a preferred embodiment of the method, the step of acquiring an arm link cross-section image and preprocessing it to obtain a preprocessed arm link image specifically comprises:
capturing images of the arm link cross section with a plurality of cameras arranged around the flexible robot to obtain an arm link cross-section image;
and performing grayscale conversion, image segmentation and image filtering on the cross-section image to obtain the preprocessed arm link image.
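As one illustrative sketch of the three preprocessing operations above — grayscale conversion, segmentation and filtering — the following numpy-only pipeline could be used. The function name, threshold rule and kernel size are assumptions for illustration, not taken from the patent:

```python
import numpy as np

def preprocess(rgb, threshold=None, ksize=3):
    """Grayscale conversion, binary segmentation, median filtering.

    Minimal sketch of the preprocessing pipeline; `rgb` is an
    (H, W, 3) uint8 image, and the returned array is binary (0/1).
    """
    # 1. Grayscale conversion (ITU-R BT.601 luma weights).
    gray = rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114
    # 2. Segmentation by a global threshold (mean intensity if none given).
    if threshold is None:
        threshold = gray.mean()
    binary = (gray > threshold).astype(np.uint8)
    # 3. Median filtering to suppress salt-and-pepper noise.
    pad = ksize // 2
    padded = np.pad(binary, pad, mode="edge")
    windows = np.lib.stride_tricks.sliding_window_view(padded, (ksize, ksize))
    return np.median(windows, axis=(-2, -1)).astype(np.uint8)
```

In practice an OpenCV equivalent (grayscale conversion, Otsu thresholding, median blur) would serve the same role; the sketch above only mirrors the three named operations.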
Further, as a preferred embodiment of the method, the step of extracting features from the preprocessed arm link image to obtain ellipse parameters specifically comprises:
performing edge detection on the preprocessed arm link image to obtain discrete ellipse pixel points;
fitting the discrete ellipse pixel points to obtain coarse ellipse fitting parameters;
and refining the coarse ellipse fitting parameters with a pre-constructed optimization model of the multi-view fusion pose measurement error to obtain accurate ellipse parameters.
Further, as a preferred embodiment of the method, the step of fitting the discrete ellipse pixel points to obtain coarse ellipse fitting parameters specifically comprises:
representing the imaging of the arm link on the image plane by the binary quadratic function of a spatial elliptical arc to obtain imaging pixel points;
and fitting the imaging pixel points with a nonlinear least-squares equation to obtain the coarse ellipse fitting parameters.
Specifically, the projection of the cross section of a circular arm segment onto the image plane is an ellipse (or, degenerately, a straight line) and is represented by a binary quadratic equation. Taking a typical elliptical projection as an example, the spatial elliptical arc is
E_i(u, v) = u^2 + A_i uv + B_i v^2 + C_i u + D_i v + F_i = 0,   (1)
where A_i, B_i, C_i, D_i, F_i are the conic coefficients for camera i, and u and v denote the abscissa and ordinate of the image plane of camera i.
A nonlinear least-squares equation fitting the imaging pixel points is established as
min Σ_k d(u_{ik}, v_{ik})^2,   (2)
where d(·) is the distance from pixel point k to the fitted ellipse. According to the minimum algebraic distance method, equation (2) can be reduced to
min Σ_k [E_i(u_{ik}, v_{ik})]^2.   (3)
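Once the conic is normalised so the u² coefficient equals 1, the algebraic-distance fit becomes an ordinary linear least-squares problem. A minimal sketch (coefficient naming illustrative):

```python
import numpy as np

def fit_ellipse_algebraic(u, v):
    """Coarse ellipse fit minimising the algebraic distance for the conic
        u^2 + A*u*v + B*v^2 + C*u + D*v + F = 0.
    Returns (A, B, C, D, F)."""
    M = np.column_stack([u * v, v**2, u, v, np.ones_like(u)])
    coeffs, *_ = np.linalg.lstsq(M, -u**2, rcond=None)
    return coeffs

def conic_residual(coeffs, u, v):
    """Algebraic residual of points (u, v) against the fitted conic."""
    A, B, C, D, F = coeffs
    return u**2 + A * u * v + B * v**2 + C * u + D * v + F
```

For example, points sampled on the circle (u-3)² + (v-1)² = 4 recover the coefficients (0, 1, -6, -2, 6) exactly.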
Pose measurement modeling is then performed on the elliptic features acquired by a single camera. Assuming the five ellipse parameters of the unknown circular target displayed in the image plane of camera i are the center coordinates (u_{ci}, v_{ci}), the semi-axes a_i and b_i, and the orientation angle φ_i, the ellipse equation in the image plane can be further simplified accordingly (equation (6)).
Substituting equation (6) into equation (4),
the position and the attitude of the center of the circular target recognized by camera i, expressed in that camera's frame, are respectively obtained,
where λ_l (l = 1, …, n) are the eigenvalues of the conic matrix H_i, e_l are the corresponding orthonormal eigenvectors, and r is the radius of the circular target.
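The five geometric ellipse parameters can be packed into the 3×3 conic matrix on whose eigen-structure the single-camera pose recovery relies. The helper below only sketches that representation; the symbol names are assumptions, not from the patent:

```python
import numpy as np

def conic_matrix(uc, vc, a, b, phi):
    """3x3 conic matrix H for an ellipse with centre (uc, vc), semi-axes
    a >= b and orientation phi, so that [u, v, 1] H [u, v, 1]^T = 0 holds
    for every point (u, v) on the ellipse."""
    R = np.array([[np.cos(phi), -np.sin(phi)],
                  [np.sin(phi),  np.cos(phi)]])
    # Quadratic form of (p - c)^T Q (p - c) = 1 with axis-aligned radii.
    Q = R @ np.diag([1.0 / a**2, 1.0 / b**2]) @ R.T
    c = np.array([uc, vc])
    H = np.zeros((3, 3))
    H[:2, :2] = Q
    H[:2, 2] = -Q @ c
    H[2, :2] = -Q @ c
    H[2, 2] = c @ Q @ c - 1.0
    return H
```

The circular target's position and normal would then follow from the eigen-decomposition of this matrix together with the known radius, as the text above indicates.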
Further, as a preferred embodiment of the method, the step of refining the coarse ellipse fitting parameters with the pre-constructed optimization model of the multi-view fusion pose measurement error to obtain accurate ellipse parameters specifically comprises:
unifying the coordinate systems of all cameras into one coordinate system;
constructing the optimization model of the multi-view fusion pose measurement error and optimizing the ellipse parameters acquired by each camera;
and solving the optimization model of the multi-view fusion pose measurement error with a particle swarm optimization algorithm to obtain the accurate ellipse parameters.
Specifically, the position and the attitude of the center of the circular target recognized by camera j, expressed in the frame of camera i, are respectively given by
where R_{ij} and t_{ij} are respectively the rotation and translation homogeneous transformation matrices of the C_j coordinate system in the C_i reference frame.
The pose deviation, expressed in the frame of camera 1, between the end pose measured by camera i and that measured by camera 1 can be simplified accordingly.
Thus, an optimization model under cooperative measurement by multiple cameras can be established:
where k indexes the k-th pixel point of the elliptical arc on the image plane, E denotes the fitted ellipse, Δp_{i1} denotes the relative deviation, expressed in the frame of camera 1, between the end position measured by camera i and by camera 1, Δn_{i1} denotes the corresponding relative deviation of the measured end normal vectors, u_{ci} and v_{ci} denote the abscissa and ordinate of the fitted ellipse center measured by camera i, u_{ik} and v_{ik} denote the abscissa and ordinate of the k-th discrete pixel point acquired by camera i, b_i and a_i denote the fitted semi-minor and semi-major axes measured by camera i, and φ_i denotes the fitted ellipse orientation angle measured by camera i.
Solving equation (14) with the particle swarm optimization method yields the optimized ellipse parameters; substituting them into equations (12) and (13) gives the position of the center of the circular cross section of the arm link, which is recorded for later use.
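A minimal particle swarm optimiser of the kind that could solve the multi-view fusion model (eq. (14)) is sketched below; the hyper-parameters and names are illustrative and not prescribed by the patent:

```python
import numpy as np

def pso(cost, bounds, n_particles=40, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimise `cost` (parameter vector -> scalar) over the box `bounds`
    (sequence of (lo, hi) pairs) with a basic particle swarm."""
    rng = np.random.default_rng(seed)
    bounds = np.asarray(bounds, dtype=float)
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, size=(n_particles, len(bounds)))
    v = np.zeros_like(x)
    pbest, pcost = x.copy(), np.array([cost(p) for p in x])
    gbest = pbest[np.argmin(pcost)]
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # Inertia + attraction to personal and global bests.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        c = np.array([cost(p) for p in x])
        better = c < pcost
        pbest[better], pcost[better] = x[better], c[better]
        gbest = pbest[np.argmin(pcost)]
    return gbest, pcost.min()
```

For the fusion model, the parameter vector would stack the ellipse parameters of all cameras and the cost would be the combined fitting-residual and inter-camera pose-deviation terms described above.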
Further, as a preferred embodiment of the method, the step of calculating the end pose and arm-shape parameters of the flexible robot from the ellipse parameters specifically comprises:
calculating the position and attitude of the arm link cross-section center of the flexible robot from the accurate ellipse parameters;
and calculating the center pose of each cylindrical arm segment from the kinematics, obtaining the configuration of the flexible robot and the pose of its end.
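The configuration recovery step can be sketched as follows, assuming the fused 3-D centre positions of the successive circular cross sections (ordered from base to tip) are already available; names are illustrative:

```python
import numpy as np

def configuration_from_sections(centres):
    """Given one row per measured cross-section centre, return the unit
    axis of each segment between consecutive centres and the tip position."""
    centres = np.asarray(centres, dtype=float)
    diffs = np.diff(centres, axis=0)                      # segment vectors
    axes = diffs / np.linalg.norm(diffs, axis=1, keepdims=True)
    tip = centres[-1]                                     # end position
    return axes, tip
```

This mirrors the later observation that the centres of any two circular cross sections of a segment determine its direction vector.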
Further, as a preferred embodiment of the method, the pre-constructed space optimization model is specifically a space optimization model established for the task of the flexible robot crossing a slit, namely:
an operation space optimization model under multiple arm-shape and joint-angle constraints, established so that the end pose difference of the flexible robot is minimized, the direction of the arm segment inside the slit is kept as perpendicular as possible to the slit normal vector, and the actual joint angle limits are respected.
Referring to fig. 4, in a scene where the flexible robot crosses a slit to perform a task, rope deformation, via-hole friction between the ropes and the rigid arm, limitations of conventional measurement means and similar factors lower the control accuracy of the end and arm shape of the flexible robot, making the slit-crossing task difficult to complete. The configuration measurement method based on multi-view vision is therefore combined to acquire the configuration parameters of the robot in real time and optimize the operation space trajectory of the flexible robot.
From the multi-view fusion measurement results, the position vectors of the centers of any two circular cross sections of the controlled arm segment can be obtained as follows:
Suppose the normal vector of the narrow space plane crossed by the controlled arm segment is n_s. Based on the requirements of minimum end tracking error of the robot and minimum parameter error of the arm segment inside the slit, together with the joint angle limits of the flexible robot, an operation space optimization model under multiple arm-shape and joint-angle constraints can be established:
where k_p and k_a are the weight coefficients of the end position and attitude deviations respectively, θ⁻ and θ⁺ are the lower and upper bounds of the joint angles, P_e and Ψ_e are respectively the actual position and attitude of the target relative to the end obtained after optimization, P_ed and Ψ_ed are respectively the desired position and attitude of the target relative to the end, and n_s is the normal vector of the narrow space plane crossed by the controlled arm segment.
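As a stand-in for the box-constrained solver of the operation-space model (eq. (16)), a projected finite-difference gradient descent could look like the sketch below; the patent does not prescribe this particular solver, and the names are illustrative:

```python
import numpy as np

def optimise_joints(cost, theta0, lo, hi, step=1e-2, iters=500):
    """Minimise `cost(theta)` subject to joint limits lo <= theta <= hi
    by finite-difference gradient descent with projection onto the box."""
    theta = np.clip(np.asarray(theta0, dtype=float), lo, hi)
    eps = 1e-6
    for _ in range(iters):
        # Central finite-difference gradient, one coordinate at a time.
        g = np.array([(cost(theta + eps * e) - cost(theta - eps * e)) / (2 * eps)
                      for e in np.eye(len(theta))])
        theta = np.clip(theta - step * g, lo, hi)  # project onto joint limits
    return theta
```

Here `cost` would combine the weighted end position/attitude deviations with the slit-perpendicularity term; joints pushed past their limits are simply clipped back onto the feasible box.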
According to equation (16), the optimal joint angles can be obtained and the end pose calculated. Further, the end velocity of the flexible robot can be described as:
where K_w is an introduced diagonal weight matrix, and Δp_e and Δψ_e are respectively the position error and attitude error of the robot end.
According to equation (17), the joint angular velocity of the flexible robot can be expressed as:
where ∇H is the gradient mapping matrix, λ ∈ [0, 1] is a weight, I is the identity matrix, J_g⁺ denotes the generalized inverse of the Jacobian matrix J_g, K_p denotes a positive-definite weight matrix, and J_g denotes the Jacobian matrix from the joints of the flexible robot to its end.
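The joint-velocity law of eq. (18) — task-space error feedback through the generalised inverse of the Jacobian plus a null-space term that pursues the secondary (arm-shape) objective without disturbing the end pose — can be sketched as follows (symbol names assumed):

```python
import numpy as np

def joint_velocity(J, err, grad_H, Kp, lam=0.5):
    """theta_dot = J+ Kp err + lam (I - J+ J) grad_H.

    J: end-effector Jacobian (m x n), err: task-space error (m,),
    grad_H: gradient of the secondary objective (n,), Kp: (m x m) gain."""
    J_pinv = np.linalg.pinv(J)                # generalised (pseudo-)inverse
    primary = J_pinv @ (Kp @ err)             # end-pose error feedback
    null_proj = np.eye(J.shape[1]) - J_pinv @ J  # null-space projector
    return primary + lam * null_proj @ grad_H
```

The null-space projector guarantees that the secondary term produces no end-effector motion, since J (I - J⁺J) = 0.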
Referring to fig. 2, an operation space trajectory optimization system of a flexible robot comprises:
an image processing module for acquiring an arm link cross-section image and preprocessing it to obtain a preprocessed arm link image;
a feature extraction module for extracting features from the preprocessed arm link image to obtain ellipse parameters;
a kinematic parameter calculation module for calculating the end pose and arm-shape parameters of the flexible robot from the ellipse parameters;
an error calculation module for calculating the end pose difference and the arm-shape error from the end pose, the arm-shape parameters and their expected values;
an operation space trajectory optimization module for obtaining the optimal joint angles from the pose difference and a pre-constructed space optimization model;
and a threshold judgment module for judging that the arm-shape error and the pose difference are smaller than preset thresholds and finishing the optimization.
The contents in the above method embodiments are all applicable to the present system embodiment, the functions specifically implemented by the present system embodiment are the same as those in the above method embodiment, and the beneficial effects achieved by the present system embodiment are also the same as those achieved by the above method embodiment.
Referring to fig. 3, an operation space trajectory optimization device of a flexible robot comprises a base 5, a flexible robot 1, a fixed camera 2, a movable camera 3 and an auxiliary mechanical arm 4, wherein:
the flexible robot 1 is fixed on the base 5 and is a rope-driven super-redundant mechanical arm whose arm links are substantially cylindrical with circular radial cross sections;
the fixed camera 2 is fixed on the base 5 and photographs the flexible robot 1;
and the movable camera 3 is fixed at the end of the auxiliary mechanical arm 4, so that images from different angles can be acquired by adjusting the auxiliary mechanical arm 4.
The contents in the above method embodiments are all applicable to the present apparatus embodiment, the functions specifically implemented by the present apparatus embodiment are the same as those in the above method embodiments, and the advantageous effects achieved by the present apparatus embodiment are also the same as those achieved by the above method embodiments.
While the preferred embodiments of the present invention have been illustrated and described, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (10)
1. A method for optimizing the operation space trajectory of a flexible robot, characterized by comprising the following steps:
acquiring an arm rod cross-section image and preprocessing it to obtain a preprocessed arm rod image;
performing feature extraction on the preprocessed arm rod image to obtain ellipse parameters;
calculating the end pose and the arm-type parameters of the flexible robot based on the ellipse parameters;
calculating an end pose difference and an arm-type error from the end pose of the flexible robot, the arm-type parameters and their corresponding expected values;
obtaining an optimal joint angle from the end pose difference and a pre-constructed space optimization model;
and determining that the arm-type error and the end pose difference are smaller than a preset threshold, and ending the optimization.
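The measure-compare-update loop of the claim above can be sketched as follows. This is a toy illustration, not the patent's implementation: `measure_pose` is a made-up one-joint forward model standing in for the multi-camera pose measurement, and a simple gradient step stands in for the space optimization model.

```python
import numpy as np

# Hypothetical stand-in for the camera-based pose measurement: one
# "joint angle" moves the end point around the unit circle.
def measure_pose(theta):
    return np.array([np.cos(theta), np.sin(theta)])

def optimize_trajectory(theta0, desired_pose, threshold=1e-6, max_iter=200):
    """Measure the pose, form the pose difference against the expected
    value, update the joint angle, and stop once the difference falls
    below the preset threshold."""
    theta = theta0
    for _ in range(max_iter):
        err = desired_pose - measure_pose(theta)
        if np.linalg.norm(err) < threshold:
            break
        jac = np.array([-np.sin(theta), np.cos(theta)])  # d(pose)/d(theta)
        theta += 0.5 * float(jac @ err)  # gradient step on the pose error
    return theta

theta_opt = optimize_trajectory(0.0, measure_pose(1.0))
```

In the patent the update comes from the constrained space optimization model of claim 8 rather than a plain gradient step; the loop structure is the same.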
2. The method for optimizing the operation space trajectory of a flexible robot according to claim 1, wherein the step of acquiring an arm rod cross-section image and preprocessing it to obtain a preprocessed arm rod image specifically comprises:
performing image extraction on the arm rod cross-section with a plurality of cameras arranged around the flexible robot to obtain an arm rod cross-section image;
and performing grayscale conversion, image segmentation and image-filtering preprocessing on the arm rod cross-section image to obtain a preprocessed arm rod image.
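The preprocessing chain of this claim (grayscale conversion, segmentation, filtering) can be sketched in plain NumPy. The global threshold and the 3x3 mean filter are assumptions chosen for brevity; the patent does not specify the particular operators.

```python
import numpy as np

def preprocess_arm_image(rgb):
    """Sketch of the claimed preprocessing: grayscale conversion,
    threshold segmentation, and mean filtering."""
    # grayscale conversion (ITU-R BT.601 luma weights)
    gray = rgb @ np.array([0.299, 0.587, 0.114])
    # global-threshold segmentation (an assumed, simplistic segmenter)
    mask = (gray > gray.mean()).astype(float)
    # 3x3 mean filter to suppress segmentation noise
    padded = np.pad(mask, 1, mode="edge")
    h, w = mask.shape
    smooth = sum(padded[i:i + h, j:j + w]
                 for i in range(3) for j in range(3)) / 9.0
    return gray, mask, smooth

# demo: a bright "arm rod cross-section" blob on a dark background
demo = np.zeros((8, 8, 3))
demo[2:6, 2:6, :] = 255.0
gray, mask, smooth = preprocess_arm_image(demo)
```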
3. The method according to claim 2, wherein the step of performing feature extraction on the preprocessed arm rod image to obtain ellipse parameters specifically comprises:
performing edge detection on the preprocessed arm rod image to obtain discrete ellipse pixel points;
fitting the discrete ellipse pixel points to obtain rough ellipse fitting parameters;
and processing the rough ellipse fitting parameters with a pre-constructed optimization model of the multi-view fusion pose measurement error to obtain accurate ellipse parameters.
4. The method according to claim 3, wherein the step of fitting the discrete ellipse pixel points to obtain rough ellipse fitting parameters specifically comprises:
representing the imaging of the arm rod on the image plane as a bivariate quadratic function of the spatial elliptical arc to obtain imaging pixel points;
and fitting the imaging pixel points through a nonlinear least-squares equation to obtain rough ellipse fitting parameters.
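A least-squares conic fit of the kind this claim describes can be sketched as follows. Fitting the general conic via SVD is one standard way to obtain rough ellipse parameters; the patent does not name its exact solver, so this is an illustrative choice.

```python
import numpy as np

def fit_conic(px, py):
    """Fit the general conic a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0
    to discrete edge pixels by linear least squares: the coefficient
    vector is the right singular vector of the design matrix with the
    smallest singular value (so ||coef|| = 1)."""
    D = np.column_stack([px * px, px * py, py * py, px, py, np.ones_like(px)])
    _, _, vt = np.linalg.svd(D)
    return vt[-1]

# synthetic arc of a known ellipse: center (2, 1), semi-axes 3 and 1.5
t = np.linspace(0.0, 2.0 * np.pi, 50)
px = 2.0 + 3.0 * np.cos(t)
py = 1.0 + 1.5 * np.sin(t)
coef = fit_conic(px, py)
D = np.column_stack([px * px, px * py, py * py, px, py, np.ones_like(px)])
residual = float(np.abs(D @ coef).max())  # near zero for a clean arc
```

Geometric ellipse parameters (center, semi-axes, deflection angle) can then be recovered from the conic coefficients.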
5. The method for optimizing the operation space trajectory of a flexible robot according to claim 4, wherein the step of processing the rough ellipse fitting parameters based on the pre-constructed optimization model of the multi-view fusion pose measurement error to obtain accurate ellipse parameters specifically comprises:
unifying the coordinate systems of the cameras into one common coordinate system;
constructing an optimization model of the multi-view fusion pose measurement error and optimizing the ellipse parameters acquired by each camera;
and solving the optimization model of the multi-view fusion pose measurement error with a particle swarm optimization algorithm to obtain accurate ellipse parameters.
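A minimal particle swarm optimizer of the kind this claim invokes can be sketched as follows. The inertia and acceleration coefficients (0.7, 1.5, 1.5) and the toy quadratic cost are assumptions for illustration, not values from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso(cost, dim, n_particles=30, iters=100, lo=-5.0, hi=5.0):
    """Minimal particle swarm optimizer: every particle remembers its
    personal best, and all velocities are pulled toward both that and
    the swarm-wide global best."""
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pcost = np.array([cost(p) for p in x])
    gbest = pbest[pcost.argmin()]
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        c = np.array([cost(p) for p in x])
        better = c < pcost
        pbest[better] = x[better]
        pcost[better] = c[better]
        gbest = pbest[pcost.argmin()]
    return gbest, float(pcost.min())

# toy stand-in for the multi-view fusion measurement error:
# the true optimum of this cost is the point (1, 2)
best, best_cost = pso(lambda p: float(np.sum((p - np.array([1.0, 2.0])) ** 2)),
                      dim=2)
```

In the patent the cost would be the multi-view fusion error of claim 6 and the search variables the ellipse parameters of each camera.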
6. The method for optimizing the operation space trajectory of the flexible robot according to claim 5, wherein the expression of the optimization model of the multi-view fusion pose measurement error is as follows:
wherein k denotes the k-th pixel point of the elliptical arc on the image plane, E denotes the fitted ellipse, and the remaining quantities are, respectively: the relative deviation, expressed in the frame of camera 1, between the end position measured by camera i and that measured by camera 1; the relative deviation, expressed in the frame of camera 1, between the end normal vector measured by camera i and that measured by camera 1; the abscissa and ordinate of the center of the ellipse fitted from the measurement of camera i; the abscissa and ordinate of the k-th discrete pixel point acquired by camera i; the minor and major semi-axes of the ellipse fitted from the measurement of camera i; and the deflection angle of that fitted ellipse.
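The formula itself did not survive the text extraction; only its symbol definitions remain. Under the assumption that the model combines per-camera ellipse-fitting residuals with cross-camera consistency terms, a form consistent with those definitions (a reconstruction, not the patent's verified expression) is:

```latex
\min \;\sum_{i=1}^{N}\sum_{k}
  \left[\frac{\tilde{x}_{ik}^{\,2}}{(a^{i})^{2}}
       +\frac{\tilde{y}_{ik}^{\,2}}{(b^{i})^{2}}-1\right]^{2}
  \;+\;\sum_{i=2}^{N}
  \left(\lVert\Delta\mathbf{p}_{1i}\rVert^{2}
       +\lVert\Delta\mathbf{n}_{1i}\rVert^{2}\right)
```

Here \( \tilde{x}_{ik},\tilde{y}_{ik} \) are the coordinates of the k-th pixel of camera i, rotated by the ellipse deflection angle \( \theta^{i} \) about the fitted center \( (x_c^{i},y_c^{i}) \); \( a^{i},b^{i} \) are the fitted semi-axes; and \( \Delta\mathbf{p}_{1i},\Delta\mathbf{n}_{1i} \) are the camera-1-frame deviations of end position and end normal described in the claim. All symbol names are introduced here for illustration.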
7. The method for optimizing the operation space trajectory of a flexible robot according to claim 6, wherein the step of calculating the end pose and the arm-type parameters of the flexible robot based on the ellipse parameters specifically comprises:
calculating the pose of the center of the arm rod cross-section of the flexible robot from the accurate ellipse parameters;
and calculating, based on kinematics, the center pose of each cylindrical arm segment of the flexible robot to obtain the configuration of the flexible robot and the pose of its end.
8. The method according to claim 7, wherein the pre-constructed space optimization model is specifically a space optimization model established for the task of the flexible robot passing through a slit, and its construction comprises:
establishing, based on minimizing the end pose difference of the flexible robot, an operation space optimization model under the multiple constraints of arm type and joint angle, in which the direction of the arm segment inside the slit is kept as perpendicular as possible to the normal vector of the slit and the actual joint angle limits are taken into account.
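The multi-constraint model described above can be written, as one hedged sketch over the joint angles \( \mathbf{q} \) (the weight \( \lambda \) and symbol names are assumptions, not from the patent), as a bound-constrained problem:

```latex
\min_{\mathbf{q}}\;
  \lVert \mathbf{p}(\mathbf{q})-\mathbf{p}_{d}\rVert^{2}
  +\lambda\,\bigl(\mathbf{a}_{s}(\mathbf{q})^{\mathsf{T}}\mathbf{n}_{s}\bigr)^{2}
\qquad \text{s.t.}\qquad
  \mathbf{q}_{\min}\le\mathbf{q}\le\mathbf{q}_{\max}
```

where \( \mathbf{p}(\mathbf{q}) \) is the end pose, \( \mathbf{p}_{d} \) its expected value, \( \mathbf{a}_{s} \) the unit direction of the arm segment inside the slit and \( \mathbf{n}_{s} \) the slit normal; the penalty term drives the segment direction toward perpendicularity with the slit normal, and the bounds encode the actual joint-angle limits.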
9. An operation space trajectory optimization system of a flexible robot, comprising:
an image processing module, used for acquiring an arm rod cross-section image and preprocessing it to obtain a preprocessed arm rod image;
a feature extraction module, used for performing feature extraction on the preprocessed arm rod image to obtain ellipse parameters;
a kinematic parameter calculation module, used for calculating the end pose and the arm-type parameters of the flexible robot based on the ellipse parameters;
an error calculation module, used for calculating an end pose difference and an arm-type error from the end pose of the flexible robot, the arm-type parameters and their corresponding expected values;
a trajectory optimization module, used for obtaining an optimal joint angle from the end pose difference and a pre-constructed space optimization model;
and a threshold judging module, used for determining that the arm-type error and the end pose difference are smaller than a preset threshold, and ending the optimization.
10. An operation space trajectory optimization device of a flexible robot, characterized by comprising a base, a flexible robot, a fixed camera, a movable camera and an auxiliary mechanical arm, wherein:
the flexible robot is fixed on the base and is a rope-driven hyper-redundant mechanical arm; each arm rod of the flexible robot is approximately cylindrical, so its radial cross-section is circular;
the fixed camera is fixed on the base and photographs the flexible robot;
and the movable camera is mounted at the end of the auxiliary mechanical arm, so that images from different viewing angles can be acquired by adjusting the auxiliary mechanical arm.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110260593.5A CN112862812B (en) | 2021-03-10 | 2021-03-10 | Method, system and device for optimizing operation space trajectory of flexible robot |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110260593.5A CN112862812B (en) | 2021-03-10 | 2021-03-10 | Method, system and device for optimizing operation space trajectory of flexible robot |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112862812A true CN112862812A (en) | 2021-05-28 |
CN112862812B CN112862812B (en) | 2023-04-07 |
Family
ID=75993284
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110260593.5A Active CN112862812B (en) | 2021-03-10 | 2021-03-10 | Method, system and device for optimizing operation space trajectory of flexible robot |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112862812B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109048890A (en) * | 2018-07-13 | 2018-12-21 | 哈尔滨工业大学(深圳) | Coordination method for controlling trajectory, system, equipment and storage medium based on robot |
US20190184560A1 (en) * | 2017-01-19 | 2019-06-20 | Beijing University Of Technology | A Trajectory Planning Method For Six Degree-of-Freedom Robots Taking Into Account of End Effector Motion Error |
CN110116407A (en) * | 2019-04-26 | 2019-08-13 | 哈尔滨工业大学(深圳) | Flexible robot's pose measuring method and device |
CN110561419A (en) * | 2019-08-09 | 2019-12-13 | 哈尔滨工业大学(深圳) | arm-shaped line constraint flexible robot track planning method and device |
CN110561420A (en) * | 2019-08-09 | 2019-12-13 | 哈尔滨工业大学(深圳) | Arm profile constraint flexible robot track planning method and device |
Non-Patent Citations (1)
Title |
---|
JIANQING PENG ET AL: "Dynamic modeling and trajectory tracking control method of segmented linkage cable-driven hyper-redundant robot", 《NONLINEAR DYNAMICS》 * |
Also Published As
Publication number | Publication date |
---|---|
CN112862812B (en) | 2023-04-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111735479B (en) | Multi-sensor combined calibration device and method | |
CN110116407B (en) | Flexible robot position and posture measuring method and device | |
CN112476434B (en) | Visual 3D pick-and-place method and system based on cooperative robot | |
Jiang et al. | An overview of hand-eye calibration | |
CN109270534B (en) | Intelligent vehicle laser sensor and camera online calibration method | |
CN107741234B (en) | Off-line map construction and positioning method based on vision | |
CN111775146A (en) | Visual alignment method under industrial mechanical arm multi-station operation | |
CN111476841B (en) | Point cloud and image-based identification and positioning method and system | |
CN112184812B (en) | Method for improving identification and positioning precision of unmanned aerial vehicle camera to april tag and positioning method and system | |
CN102722697A (en) | Unmanned aerial vehicle autonomous navigation landing visual target tracking method | |
CN113205603A (en) | Three-dimensional point cloud splicing reconstruction method based on rotating platform | |
Xu et al. | Vision-based simultaneous measurement of manipulator configuration and target pose for an intelligent cable-driven robot | |
CN112658643B (en) | Connector assembly method | |
CN113221953A (en) | Target attitude identification system and method based on example segmentation and binocular depth estimation | |
CN112588621B (en) | Agricultural product sorting method and system based on visual servo | |
CN114581632A (en) | Method, equipment and device for detecting assembly error of part based on augmented reality technology | |
CN117340879A (en) | Industrial machine ginseng number identification method and system based on graph optimization model | |
CN112862812B (en) | Method, system and device for optimizing operation space trajectory of flexible robot | |
CN112288801A (en) | Four-in-one self-adaptive tracking shooting method and device applied to inspection robot | |
CN111609794A (en) | Target satellite and rocket docking ring capturing point positioning method based on capturing of two mechanical arms | |
CN113240751B (en) | Calibration method for robot tail end camera | |
Zhang et al. | Automatic Extrinsic Parameter Calibration for Camera-LiDAR Fusion using Spherical Target | |
Liu et al. | An image-based accurate alignment for substation inspection robot | |
Liang et al. | An integrated camera parameters calibration approach for robotic monocular vision guidance | |
Yang et al. | A novel multi-camera differential binocular vision sensor |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||