WO2018209862A1 - Pose error correction method and apparatus, robot, and storage medium - Google Patents

Pose error correction method and apparatus, robot, and storage medium

Info

Publication number
WO2018209862A1
WO2018209862A1 · PCT/CN2017/103263 · CN2017103263W
Authority
WO
WIPO (PCT)
Prior art keywords
pose
state
mean square
minimum mean
square error
Prior art date
Application number
PCT/CN2017/103263
Other languages
English (en)
French (fr)
Inventor
阳方平
Original Assignee
广州视源电子科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 广州视源电子科技股份有限公司
Publication of WO2018209862A1

Links

Images

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1628Programme controls characterised by the control loop
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1656Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1694Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697Vision controlled systems
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00Program-control systems
    • G05B2219/30Nc systems
    • G05B2219/40Robotics, robotics mapping to robotics vision
    • G05B2219/40002Camera, robot follows direction movement of operator head, helmet, headstick
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00Program-control systems
    • G05B2219/30Nc systems
    • G05B2219/40Robotics, robotics mapping to robotics vision
    • G05B2219/40005Vision, analyse image at one station during manipulation at next station
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00Program-control systems
    • G05B2219/30Nc systems
    • G05B2219/40Robotics, robotics mapping to robotics vision
    • G05B2219/40519Motion, trajectory planning

Definitions

  • the present invention relates to the field of robot control technologies, and in particular, to a pose error correction method and apparatus, a robot, and a storage medium.
  • the robotic arm is a mechatronic device that mimics the functions of the human arm, wrist, and hand. It can move any object or tool according to time-varying requirements on spatial pose (position and attitude), so as to complete the operational requirements of an industrial production task.
  • in existing industrial production, a robotic arm and a fixed-position image capture device are usually combined to photograph and inspect the outer surface of an object or tool. For example, the robot arm grasps a circuit board to be inspected and moves it into the shooting range of the camera lens; the arm is then controlled to move within that range so that the camera photographs the solder surface of the board several times, and so that the photos taken can be combined into one complete picture of the solder surface, completing the inspection of the solder surface of the board.
  • the robot arm and image acquisition device are controlled by the same controller.
  • the controller triggers the image acquisition device to take a picture and, at the same time, determines the end pose of the arm; the pose of the object to be inspected in each photo is then estimated from the relative pose relationship between the robot arm and the object, and the photos are combined, according to the pose estimated for each photo, into one photo containing the entire photographed surface of the object. Because there is a delay between the controller triggering the image acquisition device and the device actually taking the photo, the determined end pose of the robot arm has an error, which degrades the accuracy of the pose of the object in each photo and the quality and accuracy of the synthesized photo.
  • the embodiments of the present invention provide a pose error correction method and apparatus, a robot, and a storage medium, so as to solve the technical problems that the pose accuracy of the object to be inspected in the photos is low, and the quality and accuracy of the synthesized photo are low, due to the pose error at the end of the arm.
  • an embodiment of the present invention provides a method for correcting a pose error, including:
  • the photographing function is triggered at each sampling instant, and the pose state predicted value and the pose state minimum mean square error of the robot arm target position are obtained, wherein the pose of the photographed target object relative to the robot arm target position is fixed;
  • the pose state predicted value and the pose state minimum mean square error are corrected to obtain a corrected predicted value and a corrected minimum mean square error; and
  • the shooting region pose of the target object in the image captured at the corresponding sampling instant is determined according to the corrected predicted value and the corrected minimum mean square error.
  • the embodiment of the present invention further provides a pose error correction apparatus, including:
  • a parameter acquisition module, configured to trigger a photographing function at each sampling instant and obtain a pose state predicted value and a pose state minimum mean square error of the robot arm target position, wherein the pose of the photographed target object relative to the robot arm target position is fixed;
  • a correction module, configured to correct the pose state predicted value and the pose state minimum mean square error to obtain a corrected predicted value and a corrected minimum mean square error; and
  • a pose determination module, configured to determine, according to the corrected predicted value and the corrected minimum mean square error, the shooting region pose of the target object in the image captured at the corresponding sampling instant.
  • an embodiment of the present invention further provides a robot, including:
  • one or more processors;
  • a storage device for storing one or more programs;
  • an image acquisition device for capturing images;
  • a robot arm for fixing the target object;
  • wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the pose error correction method as described in the first aspect.
  • an embodiment of the present invention further provides a storage medium including computer-executable instructions which, when executed by a computer processor, perform the pose error correction method according to the first aspect.
  • the pose error correction method and apparatus, robot, and storage medium provided by the embodiments of the present invention trigger the photographing function at each sampling instant, acquire the pose state predicted value and the pose state minimum mean square error of the corresponding robot arm target position, and correct them, so that the shooting region pose of the target object in the image captured at the corresponding sampling instant is determined from the corrected result; this avoids errors in the acquired pose data of the robot arm target position caused by the shooting delay, improves the accuracy of the determined shooting region pose of the target object in each captured image, and thereby guarantees the accuracy with which the captured images are synthesized, so that subsequent processing can be performed more accurately.
  • FIG. 1a is a flowchart of a pose error correction method according to Embodiment 1 of the present invention;
  • FIG. 1b is a schematic structural diagram of a pose error correction system according to Embodiment 1 of the present invention;
  • FIG. 2a is a flowchart of a pose error correction method according to Embodiment 2 of the present invention;
  • FIG. 2b is a flowchart of a method for determining pose state parameters according to Embodiment 2 of the present invention;
  • FIG. 2c is a flowchart of a method for correcting pose state parameters according to Embodiment 2 of the present invention;
  • FIG. 3 is a schematic structural diagram of a pose error correction apparatus according to Embodiment 3 of the present invention;
  • FIG. 4 is a schematic structural diagram of a robot according to Embodiment 4 of the present invention.
  • FIG. 1a is a flowchart of a method for correcting a pose error according to Embodiment 1 of the present invention.
  • the pose error correction method provided in this embodiment is suitable for cases where the detection surface of a target object is photographed and inspected using a robot arm and an image acquisition device.
  • the pose error correction method provided in this embodiment may be performed by a pose error correction apparatus, which may be implemented in software and/or hardware and integrated in a robot used for pose error correction.
  • a robot here refers to a machine that performs work automatically; it can accept human command, run pre-programmed procedures, or act according to principles formulated with artificial intelligence techniques. For example, mobile forklifts and devices equipped with robotic arms are all robots.
  • FIG. 1b is a schematic structural diagram of a pose error correction system, which specifically includes a robot 1 used for pose error correction and a target object 2.
  • the robot 1 includes an image acquisition device 11, a robot arm 12, and a controller 13, and the controller 13 includes the pose error correction apparatus.
  • once fixed, the position of the image acquisition device 11 generally does not change, and the robot arm 12 moves within the shooting range of the image acquisition device 11; that is, the pose of the image acquisition device 11 relative to the base of the robot arm 12 is fixed.
  • the image acquisition device 11 may be a device such as a camera or a video camera.
  • generally, the photographed target object 2 is fixed relative to the target position of the robot arm 12, which can also be described as the pose of the target object 2 relative to the target position of the robot arm 12 being fixed.
  • the above can be understood as placing the target object 2 at the target position of the robot arm 12, so that it moves within the shooting range of the image acquisition device 11 as the arm moves.
  • the target position is preferably the end of the robot arm 12.
  • in this case, the image acquisition device 11 can photograph the target object 2 at the end of the robot arm 12.
  • the pose error correction method provided in this embodiment specifically includes:
  • S110: trigger the shooting function at each sampling instant, and obtain the pose state predicted value and the pose state minimum mean square error of the robot arm target position.
  • since the robot arm consists of a series of movable joints, the target object is generally placed on the sub-arm corresponding to one of the joints or at the end of the arm, preferably at the end of the arm; accordingly, the placement position is referred to as the target position.
  • once the target object is placed at the target position, the pose of the target object relative to the robot arm target position is fixed.
  • in this embodiment, pose error correction mainly corrects the pose error of the target position where the target object is located; therefore, when acquiring the pose state parameters of the robot arm, only those of the target position need to be acquired.
  • specifically, a sampling interval is preset, the image acquisition device is triggered to capture an image at each sampling instant corresponding to that interval, and the pose state parameters of the robot arm target position are acquired.
  • the pose state parameters preferably include the pose state predicted value and the pose state minimum mean square error.
  • the pose state predicted value is the predicted state that parameters such as the pose and velocity of the target position should have at the current sampling instant.
  • the pose state minimum mean square error is a parameter related to the error incurred when the pose state predicted value is predicted; using it, the error of the final result can be reduced and the accuracy improved.
  • note that in the robot arm field, the pose state predicted value and the pose state minimum mean square error are both matrices.
  • acquiring the pose state parameters may involve obtaining various parameters, such as running parameters or system parameters, and computing the pose state parameters from them; the specific calculation rule is not limited in this embodiment.
  • generally, the state matrix of the robot arm target position can be expressed as X(t) = [S(t); V(t)] (1), where t is the current sampling instant, X(t) is the pose state value, S(t) is the pose of the arm target position at time t (generally a 6-dimensional vector), and V(t) is the velocity of the arm target position at time t (generally a 6-dimensional vector).
  • the above is only the representation of X(t); the state equation that determines X(t) can generally be expressed as X(t) = A·X(t-1) + B·a(t) + W(t) (2), with A = [[I, T], [0, I]] and B = [[T1], [T2]].
  • here I is the 6×6 identity matrix; T is a 6×6 diagonal matrix whose diagonal elements are all the sampling interval (sampling period), denoted ΔT; T1 is a 6×6 diagonal matrix whose diagonal elements are all ΔT²/2; and T2 is a 6×6 diagonal matrix whose diagonal elements are all ΔT, so T and T2 are the same matrix.
  • a(t) is the desired acceleration of the arm target position pose at time t, which can be determined by motion planning; X(t-1) is the pose state predicted value of the previous sampling instant; and W(t) denotes the model noise, which generally satisfies a Gaussian distribution and is a 12-dimensional vector.
  • generally, when X(t) is determined by the above method, the shooting delay causes relatively large errors in the values of W(t) and X(t-1), so the resulting X(t) also has a relatively large error; therefore, in this embodiment the pose state predicted value and the pose state minimum mean square error are determined separately, W(t) is temporarily ignored when determining the predicted value, and both quantities are corrected in subsequent processing to guarantee the accuracy of the final result.
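  • to make the model concrete, the following minimal numpy sketch (not part of the patent; the matrix and function names, and the 1 ms sampling period borrowed from the embodiment below, are illustrative assumptions) assembles A and B and performs one step of equation (2):

        import numpy as np

        DT = 0.001                        # sampling period (delta T), e.g. 1 ms
        I6 = np.eye(6)
        Z6 = np.zeros((6, 6))

        T  = DT * I6                      # T  : 6x6 diagonal, elements = delta T
        T1 = (DT ** 2 / 2.0) * I6         # T1 : 6x6 diagonal, elements = delta T^2 / 2
        T2 = DT * I6                      # T2 : same matrix as T

        A = np.block([[I6, T], [Z6, I6]])   # 12x12 state-transition matrix
        B = np.vstack([T1, T2])             # 12x6 input matrix

        def propagate(x_prev, a_t, w_t=None):
            """One step of equation (2): X(t) = A X(t-1) + B a(t) + W(t)."""
            x = A @ x_prev + B @ a_t
            return x if w_t is None else x + w_t

  • with ΔT small, the B·a(t) term contributes the familiar ΔT²/2 position increment and ΔT velocity increment per step, which is why T1 and T2 take the values above.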
  • optionally, when determining the pose state minimum mean square error, the value at the current sampling instant may be determined from that at the previous sampling instant; that is, the law by which the pose state minimum mean square error varies can be determined, and the value at each sampling instant determined according to that law.
  • note that since the subsequent process corrects the pose state predicted value and the pose state minimum mean square error, the corrected predicted value and corrected minimum mean square error of the current instant are obtained; therefore, when determining the pose state predicted value and pose state minimum mean square error of the next sampling instant, it is preferable to use the corrected predicted value and corrected minimum mean square error obtained at the current sampling instant.
  • S120: correct the pose state predicted value and the pose state minimum mean square error to obtain the corrected predicted value and the corrected minimum mean square error.
  • because there is an error between triggering the shooting function and the image acquisition device actually performing the shot, the pose state predicted value and pose state minimum mean square error obtained when the shooting function is triggered deviate from the actual values at the robot arm target position when the image is actually captured; after they are obtained, they therefore need to be corrected, so that the corrected predicted value and corrected minimum mean square error better match the actual pose state predicted value and actual pose state minimum mean square error.
  • specifically, the correction method may be: computing an error gain based on the current pose state minimum mean square error, and correcting the pose state predicted value and the pose state minimum mean square error according to that error gain; the specific calculation method of the error gain is not limited in this embodiment.
  • S130: determine, according to the corrected predicted value and the corrected minimum mean square error, the shooting region pose of the target object in the image captured at the corresponding sampling instant.
  • in general, when capturing an image, the image acquisition device usually photographs only a partial area of the surface of the target object to be photographed; this partial area is referred to as the shooting region.
  • optionally, the shooting region presented in the captured image is a magnified image of the partial area actually photographed.
  • further, the shooting region pose is the pose, in the actual coordinate system, of the partial area to which the shooting region corresponds, where the actual coordinate system is the same as the coordinate system used by the robot arm; from the shooting region pose determined at each sampling instant, the actual pose of the shooting region in the corresponding image can be determined, and the captured images are then stitched together (a sketch follows below) to obtain a complete image of the photographed surface of the target object, so that subsequent operations can be performed on that image.
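  • the patent does not prescribe a stitching algorithm, so the following is only a hedged sketch of the idea, under the simplifying assumption that the shooting-region poses differ by in-plane translation only (rotation and perspective ignored); the function name stitch and the pixel-scale parameter px_per_m are hypothetical:

        import numpy as np

        def stitch(tiles, region_poses, px_per_m, canvas_shape):
            """Paste each captured tile onto a canvas at the position implied
            by its shooting-region pose (planar-translation assumption).

            tiles        : list of HxW grayscale images (numpy arrays)
            region_poses : list of 6-vectors; pose[0], pose[1] read as x, y in metres
            px_per_m     : assumed pixel scale of the canvas
            canvas_shape : (H, W) of the output image
            """
            canvas = np.zeros(canvas_shape, dtype=np.float64)
            for img, pose in zip(tiles, region_poses):
                h, w = img.shape
                r = int(round(pose[1] * px_per_m))   # row offset from y component
                c = int(round(pose[0] * px_per_m))   # col offset from x component
                canvas[r:r + h, c:c + w] = img       # overwrite overlap for simplicity
            return canvas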
  • specifically, the pose of the robot arm target position can be determined from the corrected predicted value and the corrected minimum mean square error, and the current shooting region pose is then inferred from the pose of the arm target position.
  • in general, each time the shooting region pose of the target object in the image of the current sampling instant is determined, the pose of the robot arm target position is adjusted so that the image acquisition device photographs another shooting region of the target object at the next sampling instant, until images of all shooting regions of the currently photographed surface of the target object have been obtained.
  • in the technical solution provided by this embodiment, the shooting function is triggered at each sampling instant, the pose state predicted value and pose state minimum mean square error of the corresponding robot arm target position are acquired and corrected, and the shooting region pose of the target object in the image captured at the corresponding sampling instant is determined from the corrected result; this avoids errors in the acquired pose data of the robot arm target position caused by the shooting delay, improves the accuracy of the determined shooting region pose of the target object in each captured image, and thereby guarantees the accuracy with which the captured images are synthesized, so that subsequent processing can be performed more accurately.
  • FIG. 2a is a flowchart of a pose error correction method according to Embodiment 2 of the present invention, which refines the above embodiment; specifically, obtaining the pose state predicted value and pose state minimum mean square error of the robot arm target position includes: obtaining the measurement parameters and state parameters of the arm target position at the corresponding sampling instant, and determining the pose state predicted value and pose state minimum mean square error from the state parameters.
  • correspondingly, correcting the pose state predicted value and the pose state minimum mean square error to obtain the corrected predicted value and corrected minimum mean square error includes: determining the error gain from the pose state minimum mean square error and the measurement parameters, and correcting the pose state predicted value and pose state minimum mean square error according to the error gain, the measurement parameters, and the state parameters.
  • further, before the shooting function is triggered at each sampling instant and the pose state predicted value and pose state minimum mean square error of the arm target position are obtained, the method further includes: performing motion planning on the arm target position, so that at each sampling instant the motion parameters of the target position pose are determined from the motion planning result.
  • the pose error correction method provided by this embodiment specifically includes:
  • S210: perform motion planning on the robot arm target position, so that at each sampling instant the motion parameters of the target position pose are determined from the motion planning result.
  • illustratively, motion planning plans each running instant of the arm target position, to determine the motion parameters the arm target position is expected to reach at each running instant.
  • the motion parameters include at least one of a desired position, velocity, and acceleration.
  • the specific motion planning method is not limited in this embodiment; the quintic polynomial method is used below only to describe the planning process by way of example.
  • the quintic polynomial method of motion planning can be expressed as S(t) = a0 + a1·t + a2·t² + a3·t³ + a4·t⁴ + a5·t⁵ (3), where a0 through a5 are planning coefficients, t is the current sampling instant of the moving object (in this embodiment, the robot arm target position), and S(t) is the motion planning result at time t.
  • determining the motion planning result therefore requires the specific values of the planning coefficients, which can be determined from the initial motion parameters of the arm target position: the initial-instant target position θ0, the initial-instant target velocity θ̇0, the initial-instant target acceleration θ̈0, the initial position θ(0), the initial velocity θ̇(0), the initial acceleration θ̈(0), and the sampling period T; then a0 = θ(0) (3-1), and the remaining coefficients a1 through a5 are given by formulas (3-2) to (3-6), which appear only as images in the published document.
  • after the planning coefficients are determined, formula (3) can be written as θ1(t) = a0 + a1·t + a2·t² + a3·t³ + a4·t⁴ + a5·t⁵ (4), where θ1(t) is the position the arm target position pose is expected to reach at time t; differentiating (4) gives the expected velocity θ̇1(t) = a1 + 2a2·t + 3a3·t² + 4a4·t³ + 5a5·t⁴ (5), and differentiating (5) gives the expected acceleration θ̈1(t) = 2a2 + 6a3·t + 12a4·t² + 20a5·t³ (6).
  • formulas (4), (5), and (6) are the constructed motion planning formulas of the arm target position; from them, the motion parameters the arm target position pose is expected to reach at any running instant can be obtained. Note that in practice, at least one of formulas (4), (5), and (6) may be selectively constructed according to the actual situation.
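  • the coefficient formulas (3-2) to (3-6) survive only as images, so the sketch below uses the textbook quintic boundary-condition solution, which reproduces (3-1) and meets the stated initial and target position, velocity, and acceleration; the exact expressions should be read as an assumption rather than as the patent's own formulas:

        import numpy as np

        def quintic_coeffs(p0, v0, acc0, pf, vf, af, T):
            """Quintic coefficients a0..a5 matching (p0, v0, acc0) at t = 0
            and (pf, vf, af) at t = T (standard boundary-condition solution)."""
            c0 = p0                                       # reproduces (3-1)
            c1 = v0
            c2 = acc0 / 2.0
            c3 = (20*(pf - p0) - (8*vf + 12*v0)*T - (3*acc0 - af)*T**2) / (2*T**3)
            c4 = (30*(p0 - pf) + (14*vf + 16*v0)*T + (3*acc0 - 2*af)*T**2) / (2*T**4)
            c5 = (12*(pf - p0) - 6*(vf + v0)*T - (acc0 - af)*T**2) / (2*T**5)
            return np.array([c0, c1, c2, c3, c4, c5])

        def planned_accel(coeffs, t):
            """Formula (6): second derivative of the quintic, i.e. the desired
            acceleration a(t) fed into the state equation."""
            c = coeffs
            return 2*c[2] + 6*c[3]*t + 12*c[4]*t**2 + 20*c[5]*t**3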
  • specifically, the motion parameters belong to the state parameters, which are the parameters used to determine the pose state parameters.
  • S220: trigger the shooting function at each sampling instant, and obtain the measurement parameters and state parameters of the robot arm target position at the corresponding sampling instant.
  • the measurement parameters are the parameters used to obtain the pose measurement value of the arm target position. Note that determining the pose measurement value requires not only the measurement parameters but also the pose state predicted value of the arm target position; that is, the pose measurement value is obtained by combining the measurement parameters with the pose state predicted value.
  • optionally, the measurement equation that determines the pose measurement value of the arm target position is Z(t) = H·X(t) + D(t) (7), where t is the current sampling instant and Z(t) is the resulting pose measurement value of the arm target position at time t; H is the preset observation vector, a 12×12 unit square matrix, generally determined at device initialization and kept unchanged thereafter; X(t) is the pose state predicted value at time t; and D(t) denotes the measurement noise, which generally satisfies a Gaussian distribution and is a 12-dimensional vector.
  • in general, when the pose state predicted value is corrected to obtain the final corrected predicted value, the pose measurement value of the arm target position at the current sampling instant must be taken into account to ensure the accuracy of the result.
  • S230: determine the pose state predicted value and the pose state minimum mean square error from the state parameters.
  • to guarantee the accuracy of the final result, when determining the pose state parameters, the pose state predicted value excluding the model noise and the pose state minimum mean square error related to the model noise are determined separately.
  • further, the state parameters include at least: the corrected minimum mean square error of the previous sampling instant, the corrected predicted value of the previous sampling instant, the sampling period, the motion parameters of the target position pose at the current sampling instant, and a preset first covariance matrix.
  • the corrected minimum mean square error is the final error parameter obtained by correcting the pose state minimum mean square error; the corrected predicted value is the final predicted value obtained by correcting the pose state predicted value.
  • the preset first covariance matrix is the preset covariance matrix of the model noise, which is invariant.
  • the motion parameters of the target position pose can be determined by formula (4), (5), or (6); preferably, the motion parameter is the desired acceleration, which can be determined by formula (6).
  • specifically, referring to FIG. 2b, determining the pose state predicted value and the pose state minimum mean square error includes the following steps.
  • S231: determine the pose state predicted value from the sampling period, the corrected predicted value of the previous sampling instant, and the motion parameters of the target position pose at the current sampling instant.
  • formula (2) gives the general representation of the pose state predicted value; from it, the pose state predicted value excluding the model noise can be derived as X(t|t-1) = A·X̂(t-1|t-1) + B·a(t) (8), where X(t|t-1) is the pose state predicted value obtained at time t; A and B are as in formula (2), built from the 6×6 identity matrix I and the 6×6 diagonal matrices T (diagonal elements all equal to the sampling interval ΔT), T1 (diagonal elements all ΔT²/2), and T2 (diagonal elements all ΔT, the same matrix as T); a(t) is the desired acceleration of the arm target position pose at time t, determined by formula (6); and X̂(t-1|t-1) is the corrected predicted value of the previous sampling instant (t-1).
  • note that the pose state predicted value determined in this step does not include the error parameter of the current sampling instant, so it is not the finally determined predicted value.
  • S232: determine the pose state minimum mean square error from the sampling period, the corrected minimum mean square error of the previous sampling instant, and the preset first covariance matrix.
  • optionally, the pose state minimum mean square error is P(t|t-1) = A·P(t-1|t-1)·Aᵀ + Q (9), where P(t|t-1) is the pose state minimum mean square error obtained at time t, A is as in formula (8), Q is the preset first covariance matrix, and P(t-1|t-1) is the corrected minimum mean square error of the previous sampling instant (t-1).
  • note that the pose state minimum mean square error determined in this step is an uncorrected value, not the finally determined error parameter.
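  • expressed as code, the prediction step of formulas (8) and (9) is a few lines; this sketch reuses the A and B matrices assembled earlier and is illustrative, not normative:

        import numpy as np

        def predict(x_corr_prev, P_corr_prev, a_t, A, B, Q):
            """Prediction step, formulas (8) and (9).

            x_corr_prev : corrected predicted value X^(t-1|t-1)
            P_corr_prev : corrected minimum mean square error P(t-1|t-1)
            a_t         : desired acceleration of the target position at time t
            Q           : preset first covariance matrix (model noise)
            """
            x_pred = A @ x_corr_prev + B @ a_t     # X(t|t-1), noise-free, eq. (8)
            P_pred = A @ P_corr_prev @ A.T + Q     # P(t|t-1), eq. (9)
            return x_pred, P_pred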
  • S240: determine the error gain from the pose state minimum mean square error and the measurement parameters.
  • illustratively, the error gain reflects the influence of the currently obtained pose state minimum mean square error on the pose measurement value.
  • the measurement parameters include at least a preset observation vector and a preset second covariance matrix.
  • the preset observation vector is a preset 12×12 unit square matrix.
  • the preset second covariance matrix is the preset covariance matrix of the measurement noise, which is invariant.
  • optionally, the error gain is K(t) = P(t|t-1)·H·[R + H·P(t|t-1)·Hᵀ]⁻¹ (10), where K(t) is the error gain obtained at time t, P(t|t-1) is the pose state minimum mean square error obtained at time t, H is the preset observation vector, and R is the preset second covariance matrix.
  • S250: correct the pose state predicted value and the pose state minimum mean square error according to the error gain, the measurement parameters, and the state parameters.
  • illustratively, the pose state parameters are corrected using the error gain determined above together with the measurement parameters and state parameters, to guarantee the accuracy of the corrected pose state parameters.
  • optionally, referring to FIG. 2c, the method used to correct the pose state parameters may include the following steps.
  • S251: determine the pose measurement value from the pose state predicted value, the preset observation vector, and the preset second covariance matrix.
  • specifically, formula (7) gives the representation of the pose measurement value, and this step determines the pose measurement value directly using formula (7); the calculation method of D(t) is not limited in this embodiment, and in general the specific value of D(t) depends on the preset second covariance matrix and the current sampling instant.
  • S252: correct the pose state predicted value according to the sampling period, the error gain, the preset observation vector, and the pose measurement value, to obtain the corrected predicted value.
  • optionally, the specific correction formula is X̂(t|t) = X(t|t-1) + K(t)·[Z(t) - H·X(t|t-1)] (11), where X̂(t|t) is the corrected predicted value obtained at time t, X(t|t-1) is the pose state predicted value obtained at time t, K(t) is the error gain obtained at time t, H is the preset observation vector, and Z(t) is the pose measurement value of the arm target position at time t.
  • S253: correct the pose state minimum mean square error according to the error gain and the preset observation vector, to obtain the corrected minimum mean square error.
  • optionally, the specific correction formula is P(t|t) = [I - K(t)·H]·P(t|t-1) (12), where P(t|t) is the corrected minimum mean square error obtained at time t, I is the unit diagonal (identity) matrix of the same dimension as P(t|t-1), K(t) is the error gain obtained at time t, H is the preset observation vector, and P(t|t-1) is the pose state minimum mean square error obtained at time t.
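  • the gain and correction steps of formulas (10) to (12) follow in the same sketch form; since H here is a unit square matrix, H and its transpose are interchangeable in formula (10), so the standard form with Hᵀ is used:

        import numpy as np

        def correct(x_pred, P_pred, z, H, R):
            """Gain and correction, formulas (10)-(12).

            x_pred, P_pred : outputs of the prediction step
            z              : pose measurement Z(t) of the arm target position
            H              : preset observation vector (unit square matrix)
            R              : preset second covariance matrix (measurement noise)
            """
            S = R + H @ P_pred @ H.T                         # innovation covariance
            K = P_pred @ H.T @ np.linalg.inv(S)              # error gain K(t), eq. (10)
            x_corr = x_pred + K @ (z - H @ x_pred)           # corrected value, eq. (11)
            P_corr = (np.eye(len(x_pred)) - K @ H) @ P_pred  # eq. (12)
            return x_corr, P_corr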
  • S260: determine, from the corrected predicted value and the corrected minimum mean square error, the shooting region pose of the target object in the image captured at the corresponding sampling instant.
  • specifically, after the corrected predicted value and corrected minimum mean square error of the current sampling instant are obtained, the shooting region pose of the target object in the corresponding image is determined and it is judged whether the shooting of the detection surface of the target object is complete; if so, the shooting operation is stopped and the captured images are post-processed according to the shooting region poses obtained at the sampling instants, and if not, the shooting operation continues at the next sampling instant, with the corrected predicted value and corrected minimum mean square error of the next sampling instant determined from those of the current instant, until the shooting operation is complete.
  • in general, the initial value of the corrected predicted value and the initial value of the corrected minimum mean square error are determined at initialization; when the corrected predicted value and corrected minimum mean square error are determined at the first subsequent sampling instant, these initial values are used directly.
  • the method is illustrated with an example: the target object is a circuit board placed at the end position of the robot arm, and the image acquisition device is a camera; the solder surface of the board faces the camera so that the camera photographs it, the solder surface is photographed at each sampling instant, and only a partial area of it can be captured in each shot.
  • optionally, a Kalman filter is used to determine the corrected predicted value and the corrected minimum mean square error.
  • specifically, the device is initialized, and the preset first covariance matrix, preset second covariance matrix, preset observation vector, and sampling interval (for example, 1 ms) are determined; the model noise is initialized, the initial value of the corrected predicted value is determined as X̂(0|0), and the initial value of the corrected minimum mean square error as P(0|0).
  • further, the initial motion parameters are set to determine the motion planning equation of the arm end position, specifically formula (6).
  • the shooting operation is triggered at each sampling instant according to the sampling interval; the pose state predicted value of the corresponding sampling instant is determined by formula (8), the pose state minimum mean square error by formula (9), and, once both are determined, the error gain of the corresponding sampling instant by formula (10).
  • further, the positions and velocities of the joints corresponding to the current arm end position are measured, so that the pose measurement value of the arm end position is determined from the forward kinematics of the arm.
  • the corrected predicted value of the corresponding sampling instant is determined by formula (11), and the corrected minimum mean square error of the corresponding sampling instant by formula (12).
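  • the whole per-sample cycle of this example can be tied together as follows; this driver reuses the sketches above (DT, A, B, quintic_coeffs, planned_accel, predict, correct), and trigger_camera and measure_pose are hypothetical stand-ins for the camera trigger and the joint-reading/forward-kinematics step, not APIs from the patent:

        import numpy as np

        def trigger_camera():
            pass                                     # stand-in for the camera trigger

        def measure_pose(x_true):
            # stand-in for joint readings + forward kinematics, with synthetic noise
            return x_true + np.random.normal(0.0, 1e-4, 12)

        H = np.eye(12)                               # preset observation vector
        R = 1e-6 * np.eye(12)                        # preset second covariance matrix
        Q = 1e-8 * np.eye(12)                        # preset first covariance matrix
        x_corr = np.zeros(12)                        # X^(0|0) from initialization
        P_corr = np.eye(12)                          # P(0|0) from initialization
        coeffs = quintic_coeffs(0.0, 0.0, 0.0, 0.1, 0.0, 0.0, 100 * DT)

        region_poses = []
        for k in range(1, 101):                      # 100 sampling instants
            trigger_camera()                         # fire the shot at instant k
            a_t = planned_accel(coeffs, k * DT) * np.ones(6)
            x_pred, P_pred = predict(x_corr, P_corr, a_t, A, B, Q)
            z = measure_pose(x_pred)                 # Z(t) at the capture instant
            x_corr, P_corr = correct(x_pred, P_pred, z, H, R)
            region_poses.append(x_corr[:6])          # pose part, used for stitching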
  • from the corrected predicted value and the corrected minimum mean square error, the pose of the shooting region of the board's solder surface in the photo taken at the corresponding sampling instant is determined, and it is checked whether the shooting operation is complete; if not, the process returns to triggering the shooting operation at each sampling instant. If the shooting operation is complete, the photos are combined according to the pose of the shooting region of the board's solder surface in each photo, to obtain a photo containing the complete solder surface of the board; the solder surface is then inspected from the synthesized photo, to determine from it whether the board is qualified.
  • in the technical solution provided by this embodiment, when the shooting function is triggered at each sampling instant, the measurement parameters and state parameters of the arm target position are obtained, the pose state predicted value and pose state minimum mean square error are determined from the state parameters, and the error gain is then determined from the pose state minimum mean square error and the measurement parameters, so that the pose state predicted value and pose state minimum mean square error are corrected according to the error gain; this guarantees the accuracy of the finally obtained corrected predicted value and corrected minimum mean square error, avoids errors in the acquired pose data of the arm target position caused by the shooting delay, improves the accuracy of the determined shooting region pose of the target object in each captured image, and thereby guarantees the accuracy with which the captured images are synthesized, so that subsequent processing can be performed more accurately.
  • FIG. 3 is a schematic structural diagram of a pose error correction apparatus according to Embodiment 3 of the present invention.
  • the pose error correction apparatus provided in this embodiment specifically includes: a parameter acquisition module 301, a correction module 302, and a pose determination module 303.
  • the parameter acquisition module 301 is configured to trigger the shooting function at each sampling instant and obtain the pose state predicted value and pose state minimum mean square error of the robot arm target position, wherein the pose of the photographed target object relative to the robot arm target position is fixed;
  • the correction module 302 is configured to correct the pose state predicted value and the pose state minimum mean square error to obtain the corrected predicted value and the corrected minimum mean square error;
  • the pose determination module 303 is configured to determine, according to the corrected predicted value and the corrected minimum mean square error, the shooting region pose of the target object in the image captured at the corresponding sampling instant.
  • in the technical solution provided by this embodiment, the shooting function is triggered at each sampling instant, the pose state predicted value and pose state minimum mean square error of the corresponding robot arm target position are acquired and corrected, and the shooting region pose of the target object in the image captured at the corresponding sampling instant is determined from the corrected result; this avoids errors in the acquired pose data of the robot arm target position caused by the shooting delay, improves the accuracy of the determined shooting region pose of the target object in each captured image, and thereby guarantees the accuracy with which the captured images are synthesized, so that subsequent processing can be performed more accurately.
  • on the basis of the above embodiment, the parameter acquisition module 301 specifically includes: a shooting trigger unit, configured to trigger the shooting function at each sampling instant; a position parameter acquisition unit, configured to obtain the measurement parameters and state parameters of the robot arm target position at the corresponding sampling instant; and a parameter determination unit, configured to determine the pose state predicted value and pose state minimum mean square error from the state parameters, wherein the photographed target object is fixed relative to the robot arm target position.
  • correspondingly, the correction module 302 specifically includes: an error gain determination unit, configured to determine the error gain from the pose state minimum mean square error and the measurement parameters; and a parameter correction unit, configured to correct the pose state predicted value and the pose state minimum mean square error according to the error gain, the measurement parameters, and the state parameters.
  • on the basis of the above embodiment, the apparatus further includes: a motion planning module, configured to perform motion planning on the robot arm target position before the shooting function is triggered at each sampling instant and the pose state predicted value and pose state minimum mean square error of the arm target position are obtained, so that at each sampling instant the motion parameters of the target position pose are determined from the motion planning result, the motion parameters belonging to the state parameters.
  • on the basis of the above embodiment, the state parameters include: the corrected minimum mean square error of the previous sampling instant, the corrected predicted value of the previous sampling instant, the sampling period, the motion parameters of the target position pose at the current sampling instant, and the preset first covariance matrix.
  • on the basis of the above embodiment, the parameter determination unit specifically includes: a predicted value determination subunit, configured to determine the pose state predicted value from the sampling period, the corrected predicted value of the previous sampling instant, and the motion parameters of the target position pose at the current sampling instant; and an error determination subunit, configured to determine the pose state minimum mean square error from the sampling period, the corrected minimum mean square error of the previous sampling instant, and the preset first covariance matrix.
  • the measurement parameters include: a preset observation vector and a preset second covariance matrix.
  • on the basis of the above embodiment, the parameter correction unit specifically includes: a measurement value determination subunit, configured to determine the pose measurement value from the pose state predicted value, the preset observation vector, and the preset second covariance matrix;
  • a predicted value correction subunit, configured to correct the pose state predicted value according to the sampling period, the error gain, the preset observation vector, and the pose measurement value, to obtain the corrected predicted value;
  • an error correction subunit, configured to correct the pose state minimum mean square error according to the error gain and the preset observation vector, to obtain the corrected minimum mean square error.
  • the pose error correction device provided in this embodiment is applicable to the pose error correction method provided by any of the above embodiments, and has corresponding functions and beneficial effects.
  • FIG. 4 is a schematic structural diagram of a robot according to Embodiment 4 of the present invention.
  • the robot includes a processor 40, a storage device 41, an image acquisition device 42, a robot arm 43, an input device 44, and an output device 45.
  • the number of processors 40 in the robot may be one or more; one processor 40 is taken as an example in FIG. 4. The processor 40, storage device 41, image acquisition device 42, robot arm 43, input device 44, and output device 45 in the robot may be connected by a bus or in other ways; connection by a bus is taken as the example in FIG. 4.
  • the image acquisition device 42 is configured to capture images;
  • the robot arm 43 is configured to fix the target object.
  • the storage device 41, as a computer-readable storage medium, is used to store one or more programs, such as the program instructions/modules corresponding to the pose error correction method in the embodiments of the present invention (for example, the parameter acquisition module 301, the correction module 302, and the pose determination module 303 in the pose error correction apparatus).
  • the processor 40 runs the software programs, instructions, and modules stored in the storage device 41, thereby executing the robot's various functional applications and data processing, that is, implementing the pose error correction described above.
  • note that the controller of FIG. 1b includes the processor 40 and the storage device 41, and optionally the input device 44 and the output device 45.
  • the storage device 41 may mainly include a program storage area and a data storage area; the program storage area can store an operating system and the application programs required by at least one function, and the data storage area can store data created according to the use of the robot, and the like.
  • in addition, the storage device 41 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
  • in some examples, the storage device 41 may further include memory located remotely from the processor 40, which may be connected to the robot via a network; examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
  • Input device 44 can be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the robot.
  • the output device 45 may include a display device such as a display screen.
  • Embodiment 5 of the present invention further provides a storage medium containing computer-executable instructions which, when executed by a computer processor, perform a pose error correction method, the method including:
  • triggering the shooting function at each sampling instant, and obtaining the pose state predicted value and pose state minimum mean square error of the robot arm target position, wherein the pose of the photographed target object relative to the robot arm target position is fixed;
  • correcting the pose state predicted value and the pose state minimum mean square error to obtain a corrected predicted value and a corrected minimum mean square error;
  • determining, according to the corrected predicted value and the corrected minimum mean square error, the shooting region pose of the target object in the image captured at the corresponding sampling instant.
  • of course, the computer-executable instructions are not limited to the pose error correction method operations described above, and may also perform the related operations in the pose error correction method provided by any embodiment of the present invention.
  • from the above description of the implementations, those skilled in the art will clearly understand that the present invention may be implemented by software plus the necessary general-purpose hardware, and of course may also be implemented by hardware, although in many cases the former is the better implementation.
  • based on this understanding, the part of the technical solution of the present invention that is essential, or that contributes to the prior art, may be embodied in the form of a software product, which may be stored in a computer-readable storage medium, such as a computer floppy disk, read-only memory (ROM), random access memory (RAM), flash memory (FLASH), hard disk, or optical disk, and which includes instructions for causing a computer device (which may be a robot, a personal computer, a server, or a network device, etc.) to perform the pose error correction method described in the various embodiments of the present invention.
  • it is worth noting that, in the above embodiment of the pose error correction apparatus, the units and modules included are divided only according to functional logic, but the division is not limited to the above as long as the corresponding functions can be implemented;
  • in addition, the specific names of the functional units are only for ease of mutual distinction and are not intended to limit the protection scope of the present invention.

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Manipulator (AREA)

Abstract

A pose error correction method and apparatus, a robot, and a storage medium. The method specifically includes: triggering a shooting function at each sampling instant, and obtaining a pose state predicted value and a pose state minimum mean square error of a robot arm target position, wherein the pose of the photographed target object relative to the robot arm target position is fixed; correcting the pose state predicted value and the pose state minimum mean square error to obtain a corrected predicted value and a corrected minimum mean square error; and determining, according to the corrected predicted value and the corrected minimum mean square error, the shooting region pose of the target object in the image captured at the corresponding sampling instant. The method solves the technical problems that the pose accuracy of the object to be inspected in the photos is low, and the quality and accuracy of the synthesized photo are low, due to the pose error at the end of the robot arm.

Description

Pose error correction method and apparatus, robot, and storage medium
Technical Field
The present invention relates to the field of robot control technologies, and in particular to a pose error correction method and apparatus, a robot, and a storage medium.
Background Art
A robot arm is a mechatronic device that mimics the functions of the human arm, wrist, and hand. It can move any object or tool according to time-varying requirements on spatial pose (position and attitude), so as to complete the operational requirements of an industrial production task.
In existing industrial production, a robot arm and a fixed-position image acquisition device are usually combined to photograph and inspect the outer surface of an object or tool. For example, the robot arm grasps a circuit board to be inspected and moves it into the shooting range of a camera lens; the arm is controlled to move within that range so that the camera photographs the solder surface of the board several times, and so that the photos taken can be synthesized into one complete photo of the board's solder surface, completing the inspection of the solder surface of the board.
Generally, the robot arm and the image acquisition device are controlled by the same controller. The controller triggers the image acquisition device to take a photo and at the same time determines the end pose of the arm; the pose of the object to be inspected in each photo is estimated from the relative pose relationship between the arm and the object, and the photos are synthesized, according to the pose estimated for each photo, into one photo containing the complete photographed surface of the object. Because there is a delay between the controller triggering the image acquisition device and the device actually taking the photo, the determined end pose of the arm has an error, which in turn affects the accuracy of the pose of the object in the photos and the quality and accuracy of the synthesized photo.
Summary of the Invention
In view of this, the embodiments of the present invention provide a pose error correction method and apparatus, a robot, and a storage medium, to solve the technical problems that the pose accuracy of the object to be inspected in the photos is low, and the quality and accuracy of the synthesized photo are low, due to the pose error at the end of the arm.
In a first aspect, an embodiment of the present invention provides a pose error correction method, including:
triggering a shooting function at each sampling instant, and obtaining a pose state predicted value and a pose state minimum mean square error of a robot arm target position, wherein the pose of the photographed target object relative to the robot arm target position is fixed;
correcting the pose state predicted value and the pose state minimum mean square error to obtain a corrected predicted value and a corrected minimum mean square error; and
determining, according to the corrected predicted value and the corrected minimum mean square error, the shooting region pose of the target object in the image captured at the corresponding sampling instant.
In a second aspect, an embodiment of the present invention further provides a pose error correction apparatus, including:
a parameter acquisition module, configured to trigger a shooting function at each sampling instant and obtain a pose state predicted value and a pose state minimum mean square error of a robot arm target position, wherein the pose of the photographed target object relative to the robot arm target position is fixed;
a correction module, configured to correct the pose state predicted value and the pose state minimum mean square error to obtain a corrected predicted value and a corrected minimum mean square error; and
a pose determination module, configured to determine, according to the corrected predicted value and the corrected minimum mean square error, the shooting region pose of the target object in the image captured at the corresponding sampling instant.
In a third aspect, an embodiment of the present invention further provides a robot, including:
one or more processors;
a storage device for storing one or more programs;
an image acquisition device for capturing images; and
a robot arm for fixing a target object;
wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the pose error correction method described in the first aspect.
In a fourth aspect, an embodiment of the present invention further provides a storage medium containing computer-executable instructions which, when executed by a computer processor, perform the pose error correction method described in the first aspect.
The pose error correction method and apparatus, robot, and storage medium provided by the embodiments of the present invention trigger the shooting function at each sampling instant, acquire the pose state predicted value and pose state minimum mean square error of the corresponding robot arm target position, and correct them, determining from the corrected result the shooting region pose of the target object in the image captured at the corresponding sampling instant. This avoids errors in the acquired pose data of the robot arm target position caused by the shooting delay, improves the accuracy of the determined shooting region pose of the target object in each captured image, and thereby guarantees the accuracy with which the captured images are synthesized, so that subsequent processing can be performed more accurately.
Brief Description of the Drawings
Other features, objects, and advantages of the present invention will become more apparent from the detailed description of non-limiting embodiments made with reference to the following drawings:
FIG. 1a is a flowchart of a pose error correction method according to Embodiment 1 of the present invention;
FIG. 1b is a schematic structural diagram of a pose error correction system according to Embodiment 1 of the present invention;
FIG. 2a is a flowchart of a pose error correction method according to Embodiment 2 of the present invention;
FIG. 2b is a flowchart of a method for determining pose state parameters according to Embodiment 2 of the present invention;
FIG. 2c is a flowchart of a method for correcting pose state parameters according to Embodiment 2 of the present invention;
FIG. 3 is a schematic structural diagram of a pose error correction apparatus according to Embodiment 3 of the present invention;
FIG. 4 is a schematic structural diagram of a robot according to Embodiment 4 of the present invention.
Detailed Description
The present invention is further described in detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the present invention, not to limit it. It should also be noted that, for ease of description, the drawings show only the parts related to the present invention rather than the entire content.
Embodiment 1
FIG. 1a is a flowchart of a pose error correction method according to Embodiment 1 of the present invention. The pose error correction method provided in this embodiment is suitable for cases where the detection surface of a target object is photographed and inspected using a robot arm and an image acquisition device. The method may be performed by a pose error correction apparatus, which may be implemented in software and/or hardware and integrated in a robot used for pose error correction. A robot here refers to a machine that performs work automatically; it can accept human command, run pre-programmed procedures, or act according to principles formulated with artificial intelligence techniques. For example, mobile forklifts and devices equipped with robot arms are all robots.
To facilitate the description of the pose error correction method provided in this embodiment, reference is made to FIG. 1b, a schematic structural diagram of a pose error correction system, which specifically includes a robot 1 used for pose error correction and a target object 2. The robot 1 includes an image acquisition device 11, a robot arm 12, and a controller 13, and the controller 13 includes a pose error correction apparatus. Once fixed, the position of the image acquisition device 11 generally does not change, and the robot arm 12 moves within the shooting range of the image acquisition device 11; that is, the pose of the image acquisition device 11 relative to the base of the robot arm 12 is fixed. Preferably, the image acquisition device 11 may be a device such as a camera or a video camera. Generally, the photographed target object 2 is fixed relative to the target position of the robot arm 12, which can also be described as the pose of the target object 2 relative to the target position of the robot arm 12 being fixed; this can be understood as placing the target object 2 at the target position of the robot arm 12 so that it moves within the shooting range of the image acquisition device 11 as the arm moves. The target position is preferably the end of the robot arm 12, in which case the image acquisition device 11 can photograph the target object 2 at the end of the robot arm 12. It should be understood that the above pose error correction system does not limit this embodiment. The pose error correction method provided in this embodiment is described below with reference to FIG. 1b.
Referring to FIG. 1a, the pose error correction method provided in this embodiment specifically includes:
S110: trigger the shooting function at each sampling instant, and obtain the pose state predicted value and the pose state minimum mean square error of the robot arm target position.
Since the robot arm consists of a series of movable joints, and the target object is generally placed on the sub-arm corresponding to one of the joints or at the end of the arm, it is preferably placed at the end of the arm; accordingly, the placement position is referred to as the target position. Generally, after the target object is placed at the target position of the arm, the pose of the target object relative to the arm target position is fixed. In this embodiment, pose error correction mainly corrects the pose error of the target position where the target object is located; therefore, when acquiring the pose state parameters of the arm, only those of the target position need to be acquired.
Specifically, a sampling interval is preset, the image acquisition device is triggered to capture an image at each sampling instant corresponding to that interval, and the pose state parameters of the arm target position are acquired. The pose state parameters preferably include the pose state predicted value and the pose state minimum mean square error. The pose state predicted value is the predicted state that parameters such as the pose and velocity of the target position should have at the current sampling instant. The pose state minimum mean square error is a parameter related to the error incurred when the pose state predicted value is predicted; using it, the error of the final result can be reduced and the accuracy improved. Note that in the robot arm field, the pose state predicted value and the pose state minimum mean square error are both matrices. Acquiring the pose state parameters may involve obtaining various parameters, such as running parameters or system parameters, and computing the pose state parameters from them; the specific calculation rule is not limited in this embodiment.
Generally, the state matrix of the robot arm target position can be expressed as

X(t) = [S(t); V(t)]    (1)

where t is the current sampling instant, X(t) is the pose state value, S(t) is the pose of the arm target position at time t (generally a 6-dimensional vector), and V(t) is the velocity of the arm target position at time t (generally a 6-dimensional vector).
Note that the above is only the representation of X(t); the state equation that determines X(t) can generally be expressed as

X(t) = A·X(t-1) + B·a(t) + W(t)    (2)

with A = [[I, T], [0, I]] and B = [[T1], [T2]], where I is the 6×6 identity matrix; T is a 6×6 diagonal matrix whose diagonal elements are all the sampling interval (sampling period), denoted ΔT; T1 is a 6×6 diagonal matrix whose diagonal elements are all ΔT²/2; T2 is a 6×6 diagonal matrix whose diagonal elements are all ΔT (that is, T and T2 are the same matrix); a(t) is the desired acceleration of the arm target position pose at time t, which can be determined by motion planning; X(t-1) is the pose state predicted value of the previous sampling instant; and W(t) denotes the model noise, which generally satisfies a Gaussian distribution and is a 12-dimensional vector.
Generally, when X(t) is determined by the above method, the shooting delay causes relatively large errors in the values of W(t) and X(t-1), so the resulting X(t) also has a relatively large error. Therefore, in this embodiment the pose state predicted value and the pose state minimum mean square error are determined separately, and W(t) is temporarily ignored when determining the predicted value; in subsequent processing, the pose state predicted value and the pose state minimum mean square error are each corrected to guarantee the accuracy of the final result.
Optionally, when determining the pose state minimum mean square error, the value at the current sampling instant may be determined from that at the previous sampling instant; that is, the law by which the pose state minimum mean square error varies can be determined, and the value at each sampling instant determined according to that law.
Note that since the subsequent process corrects the pose state predicted value and the pose state minimum mean square error, the corrected predicted value and corrected minimum mean square error of the current instant are obtained; therefore, when determining the pose state predicted value and pose state minimum mean square error of the next sampling instant, it is preferable to use the corrected predicted value and corrected minimum mean square error obtained at the current sampling instant.
S120: correct the pose state predicted value and the pose state minimum mean square error to obtain the corrected predicted value and the corrected minimum mean square error.
Because there is an error between triggering the shooting function and the image acquisition device actually performing the shot, the pose state predicted value and pose state minimum mean square error obtained when the shooting function is triggered deviate from the actual values at the arm target position when the image is actually captured. Hence, after they are obtained, the pose state predicted value and pose state minimum mean square error need to be corrected, so that the corrected predicted value and corrected minimum mean square error better match the actual pose state predicted value and actual pose state minimum mean square error.
Specifically, the correction method may be: computing an error gain based on the current pose state minimum mean square error, and correcting the pose state predicted value and the pose state minimum mean square error according to that error gain. The specific calculation method of the error gain is not limited in this embodiment.
S130: determine, according to the corrected predicted value and the corrected minimum mean square error, the shooting region pose of the target object in the image captured at the corresponding sampling instant.
In general, when capturing an image, the image acquisition device usually photographs only a partial area of the surface of the target object to be photographed; this partial area is called the shooting region. Optionally, the shooting region presented in the captured image is a magnified image of the partial area actually photographed.
Further, the shooting region pose is the pose, in the actual coordinate system, of the partial area to which the shooting region corresponds, where the actual coordinate system is the same as the coordinate system used by the robot arm. From the shooting region pose determined at each sampling instant, the actual pose of the shooting region in the corresponding image can be determined; the captured images are then stitched together to obtain a complete image of the photographed surface of the target object, so that subsequent operations can be performed on that image.
Specifically, the pose of the arm target position can be determined from the corrected predicted value and the corrected minimum mean square error, and the current shooting region pose is then inferred from the pose of the arm target position. In general, each time the shooting region pose of the target object in the image of the current sampling instant is determined, the pose of the arm target position is adjusted so that the image acquisition device photographs another shooting region of the target object at the next sampling instant, until images of all shooting regions of the currently photographed surface of the target object have been obtained.
In the technical solution provided by this embodiment, the shooting function is triggered at each sampling instant, the pose state predicted value and pose state minimum mean square error of the corresponding arm target position are acquired and corrected, and the shooting region pose of the target object in the image captured at the corresponding sampling instant is determined from the corrected result. This avoids errors in the acquired pose data of the arm target position caused by the shooting delay, improves the accuracy of the determined shooting region pose of the target object in each captured image, and thereby guarantees the accuracy with which the captured images are synthesized, so that subsequent processing can be performed more accurately.
Embodiment 2
FIG. 2a is a flowchart of a pose error correction method according to Embodiment 2 of the present invention. This embodiment refines the above embodiment. Specifically, obtaining the pose state predicted value and pose state minimum mean square error of the robot arm target position includes: obtaining the measurement parameters and state parameters of the arm target position at the corresponding sampling instant; and determining the pose state predicted value and pose state minimum mean square error from the state parameters.
Correspondingly, correcting the pose state predicted value and the pose state minimum mean square error to obtain the corrected predicted value and corrected minimum mean square error specifically includes: determining the error gain from the pose state minimum mean square error and the measurement parameters; and correcting the pose state predicted value and pose state minimum mean square error according to the error gain, the measurement parameters, and the state parameters.
Further, before the shooting function is triggered at each sampling instant and the pose state predicted value and pose state minimum mean square error of the arm target position are obtained, the method further specifically includes: performing motion planning on the arm target position, so that at each sampling instant the motion parameters of the target position pose are determined from the motion planning result.
Referring to FIG. 2a, the pose error correction method provided in this embodiment specifically includes:
S210: perform motion planning on the robot arm target position, so that at each sampling instant the motion parameters of the target position pose are determined from the motion planning result.
Illustratively, motion planning plans each running instant of the arm target position, to determine the motion parameters the arm target position is expected to reach at each running instant. The motion parameters include at least one of a desired position, velocity, and acceleration. The specific motion planning method is not limited in this embodiment; the quintic polynomial method is used below only to describe the planning process by way of example:
The quintic polynomial method of motion planning can be expressed as

S(t) = a0 + a1·t + a2·t² + a3·t³ + a4·t⁴ + a5·t⁵    (3)

where a0, a1, a2, a3, a4, and a5 are planning coefficients, t is the current sampling instant of the moving object (in this embodiment, the robot arm target position), and S(t) is the motion planning result at time t.
From the above formula, determining the motion planning result of the arm target position requires the specific values of the planning coefficients, and these can be determined from the initial motion parameters of the arm target position at the start of its run.
Further, the specific process by which the initial motion parameters determine the planning coefficients is as follows. The initial motion parameters are set to include: the initial-instant target position θ0 of the arm target position, the initial-instant target velocity θ̇0, the initial-instant target acceleration θ̈0, the initial position θ(0), the initial velocity θ̇(0), the initial acceleration θ̈(0), and the sampling period T. Then

a0 = θ(0)    (3-1)

and the remaining coefficients a1 through a5 are given by formulas (3-2) to (3-6), which appear only as images in the published document.
Further, after the planning coefficients are determined, formula (3) can be written as

θ1(t) = a0 + a1·t + a2·t² + a3·t³ + a4·t⁴ + a5·t⁵    (4)

where θ1(t) is the position the arm target position pose is expected to reach at time t. Differentiating formula (4) gives

θ̇1(t) = a1 + 2a2·t + 3a3·t² + 4a4·t³ + 5a5·t⁴    (5)

the velocity the arm target position pose is expected to reach at time t, and differentiating formula (5) gives

θ̈1(t) = 2a2 + 6a3·t + 12a4·t² + 20a5·t³    (6)

the acceleration the arm target position pose is expected to reach at time t.
Further, formulas (4), (5), and (6) are the constructed motion planning formulas of the arm target position. From these formulas, the motion parameters the arm target position pose is expected to reach at any running instant can be obtained. Note that in actual application, at least one of formulas (4), (5), and (6) may be selectively constructed according to the actual situation.
Specifically, the motion parameters belong to the state parameters, which are the parameters used to determine the pose state parameters.
S220: trigger the shooting function at each sampling instant, and obtain the measurement parameters and state parameters of the robot arm target position at the corresponding sampling instant.
The measurement parameters are the parameters used to obtain the pose measurement value of the arm target position. Note that determining the pose measurement value requires not only the measurement parameters but also the pose state predicted value of the arm target position; that is, the pose measurement value of the arm target position is obtained by combining the measurement parameters with the pose state predicted value.
Optionally, the measurement equation that determines the pose measurement value of the arm target position is specifically

Z(t) = H·X(t) + D(t)    (7)

where t is the current sampling instant and Z(t) is the resulting pose measurement value of the arm target position at time t; H is the preset observation vector, a 12×12 unit square matrix, generally determined at device initialization and kept unchanged in subsequent processes; X(t) is the pose state predicted value at time t; and D(t) denotes the measurement noise, which generally satisfies a Gaussian distribution and is a 12-dimensional vector.
Generally, when the pose state predicted value is corrected to obtain the final corrected predicted value, the pose measurement value of the arm target position at the current sampling instant must be taken into account to ensure the accuracy of the result.
S230: determine the pose state predicted value and the pose state minimum mean square error from the state parameters.
To guarantee the accuracy of the final result, when determining the pose state parameters, the pose state predicted value excluding the model noise and the pose state minimum mean square error related to the model noise are determined separately.
Further, the state parameters include at least: the corrected minimum mean square error of the previous sampling instant, the corrected predicted value of the previous sampling instant, the sampling period, the motion parameters of the target position pose at the current sampling instant, and a preset first covariance matrix. The corrected minimum mean square error is the final error parameter obtained by correcting the pose state minimum mean square error; the corrected predicted value is the final predicted value obtained by correcting the pose state predicted value. The preset first covariance matrix is the preset covariance matrix of the model noise, which is invariant. The motion parameters of the target position pose can be determined by formula (4), (5), or (6); preferably, the motion parameter is the desired acceleration, which can be determined by formula (6).
Specifically, referring to FIG. 2b, determining the pose state predicted value and the pose state minimum mean square error specifically includes:
S231: determine the pose state predicted value from the sampling period, the corrected predicted value of the previous sampling instant, and the motion parameters of the target position pose at the current sampling instant.
Specifically, formula (2) gives the general representation of the pose state predicted value; from it, the pose state predicted value excluding the model noise can be derived as

X(t|t-1) = A·X̂(t-1|t-1) + B·a(t)    (8)

where X(t|t-1) is the pose state predicted value obtained at time t; A and B are as in formula (2), built from the 6×6 identity matrix I and the 6×6 diagonal matrices T (diagonal elements all equal to the sampling interval, i.e. the sampling period ΔT), T1 (diagonal elements all ΔT²/2), and T2 (diagonal elements all ΔT, the same matrix as T); a(t) is the desired acceleration of the arm target position pose at time t, which can be determined by formula (6); and X̂(t-1|t-1) is the corrected predicted value of the arm target position at the previous sampling instant (t-1).
Note that the pose state predicted value determined in this step does not include the error parameter of the current sampling instant, so it is not the finally determined predicted value.
S232: determine the pose state minimum mean square error from the sampling period, the corrected minimum mean square error of the previous sampling instant, and the preset first covariance matrix.
Optionally, the pose state minimum mean square error is specifically

P(t|t-1) = A·P(t-1|t-1)·Aᵀ + Q    (9)

where P(t|t-1) is the pose state minimum mean square error obtained at time t, A is as in formula (8), Q is the preset first covariance matrix, and P(t-1|t-1) is the corrected minimum mean square error of the previous sampling instant (t-1).
Note that the pose state minimum mean square error determined in this step is an uncorrected value, not the finally determined error parameter.
S240: determine the error gain from the pose state minimum mean square error and the measurement parameters.
Illustratively, the error gain reflects the influence of the currently obtained pose state minimum mean square error on the pose measurement value.
Specifically, the measurement parameters include at least a preset observation vector and a preset second covariance matrix. The preset observation vector is a preset 12×12 unit square matrix; the preset second covariance matrix is the preset covariance matrix of the measurement noise, which is invariant.
Optionally, the error gain is specifically

K(t) = P(t|t-1)·H·[R + H·P(t|t-1)·Hᵀ]⁻¹    (10)

where K(t) is the error gain obtained at time t, P(t|t-1) is the pose state minimum mean square error obtained at time t, H is the preset observation vector, and R is the preset second covariance matrix.
S250: correct the pose state predicted value and the pose state minimum mean square error according to the error gain, the measurement parameters, and the state parameters.
Illustratively, the pose state parameters are corrected using the error gain determined above together with the measurement parameters and state parameters, to guarantee the accuracy of the corrected pose state parameters.
Optionally, referring to FIG. 2c, the method used to correct the pose state parameters may include:
S251: determine the pose measurement value from the pose state predicted value, the preset observation vector, and the preset second covariance matrix.
Specifically, formula (7) gives the representation of the pose measurement value, and this step determines the pose measurement value directly using formula (7). The calculation method of D(t) is not limited in this embodiment; in general, the specific value of D(t) depends on the preset second covariance matrix and the current sampling instant.
S252: correct the pose state predicted value according to the sampling period, the error gain, the preset observation vector, and the pose measurement value, to obtain the corrected predicted value.
Optionally, when correcting the pose state predicted value, the specific correction formula is

X̂(t|t) = X(t|t-1) + K(t)·[Z(t) - H·X(t|t-1)]    (11)

where X̂(t|t) is the corrected predicted value obtained at time t, X(t|t-1) is the pose state predicted value obtained at time t, K(t) is the error gain obtained at time t, H is the preset observation vector, and Z(t) is the pose measurement value of the arm target position at time t.
S253: correct the pose state minimum mean square error according to the error gain and the preset observation vector, to obtain the corrected minimum mean square error.
Optionally, the specific correction formula is as follows:

P(t|t) = [I - K(t)·H]·P(t|t-1)    (12)

where P(t|t) is the corrected minimum mean square error obtained at time t, I is the unit diagonal (identity) matrix of the same dimension as P(t|t-1), K(t) is the error gain obtained at time t, H is the preset observation vector, and P(t|t-1) is the pose state minimum mean square error obtained at time t.
S260: determine, from the corrected predicted value and the corrected minimum mean square error, the shooting region pose of the target object in the image captured at the corresponding sampling instant.
Specifically, after the corrected predicted value and corrected minimum mean square error of the current sampling instant are obtained, the shooting region pose of the target object in the image captured at that instant is determined, and it is judged whether the shooting of the detection surface of the target object is complete. If the shooting operation is complete, it is stopped, and the captured images are post-processed according to the shooting region poses obtained at each sampling instant; if not, the shooting operation continues at the next sampling instant, and the corrected predicted value and corrected minimum mean square error of the next sampling instant are determined from those of the current instant, until the shooting operation is complete.
Generally, the initial value of the corrected predicted value and the initial value of the corrected minimum mean square error are determined at initialization; when the corrected predicted value and corrected minimum mean square error are determined at the first subsequent sampling instant, these initial values are used directly.
The pose error correction method provided by this embodiment is illustrated below with an example:
The target object is set to be a circuit board, placed at the end position of the robot arm. The image acquisition device is a camera. The solder surface of the board faces the camera so that the camera photographs the solder surface. The solder surface of the board is photographed at each sampling instant, and only a partial area of the solder surface can be captured in each shot.
Optionally, a Kalman filter is used to determine the corrected predicted value and the corrected minimum mean square error.
Specifically, the device is initialized, and the preset first covariance matrix, preset second covariance matrix, preset observation vector, and sampling interval (for example, 1 ms) are determined. The model noise is initialized, the initial value of the corrected predicted value is determined as X̂(0|0), and the initial value of the corrected minimum mean square error as P(0|0).
Further, the initial motion parameters are set to determine the motion planning equation of the arm end position, specifically formula (6).
The shooting operation is triggered at each sampling instant according to the sampling interval; the pose state predicted value of the corresponding sampling instant is determined by formula (8), and the pose state minimum mean square error of the corresponding sampling instant by formula (9). After the pose state predicted value and pose state minimum mean square error are determined, the error gain of the corresponding sampling instant is determined by formula (10).
Further, the positions and velocities of the joints corresponding to the current arm end position are measured, so that the pose measurement value of the arm end position is determined from the forward kinematics of the arm. The corrected predicted value of the corresponding sampling instant is determined by formula (11), and the corrected minimum mean square error of the corresponding sampling instant by formula (12).
From the corrected predicted value and the corrected minimum mean square error, the pose of the shooting region of the board's solder surface in the photo taken at the corresponding sampling instant is determined, and it is checked whether the shooting operation is currently complete; if not, the process returns to the operation of triggering the shooting operation at each sampling instant according to the sampling interval.
If the shooting operation is complete, the photos are combined according to the pose of the shooting region of the board's solder surface in each photo, to obtain a photo containing the complete solder surface of the board; the solder surface of the board is then inspected from the synthesized photo, to determine from that solder surface whether the board is qualified.
In the technical solution provided by this embodiment, when the shooting function is triggered at each sampling instant, the measurement parameters and state parameters of the arm target position are obtained, the pose state predicted value and pose state minimum mean square error are determined from the state parameters, and the error gain is then determined from the pose state minimum mean square error and the measurement parameters, so that the pose state predicted value and pose state minimum mean square error are corrected according to the error gain. This guarantees the accuracy of the finally obtained corrected predicted value and corrected minimum mean square error, avoids errors in the acquired pose data of the arm target position caused by the shooting delay, improves the accuracy of the determined shooting region pose of the target object in each captured image, and thereby guarantees the accuracy with which the captured images are synthesized, so that subsequent processing can be performed more accurately.
Embodiment 3
FIG. 3 is a schematic structural diagram of a pose error correction apparatus according to Embodiment 3 of the present invention. Referring to FIG. 3, the pose error correction apparatus provided in this embodiment specifically includes a parameter acquisition module 301, a correction module 302, and a pose determination module 303.
The parameter acquisition module 301 is configured to trigger the shooting function at each sampling instant and obtain the pose state predicted value and pose state minimum mean square error of the robot arm target position, wherein the pose of the photographed target object relative to the robot arm target position is fixed; the correction module 302 is configured to correct the pose state predicted value and the pose state minimum mean square error to obtain the corrected predicted value and the corrected minimum mean square error; and the pose determination module 303 is configured to determine, according to the corrected predicted value and the corrected minimum mean square error, the shooting region pose of the target object in the image captured at the corresponding sampling instant.
In the technical solution provided by this embodiment, the shooting function is triggered at each sampling instant, the pose state predicted value and pose state minimum mean square error of the corresponding arm target position are acquired and corrected, and the shooting region pose of the target object in the image captured at the corresponding sampling instant is determined from the corrected result. This avoids errors in the acquired pose data of the arm target position caused by the shooting delay, improves the accuracy of the determined shooting region pose of the target object in each captured image, and thereby guarantees the accuracy with which the captured images are synthesized, so that subsequent processing can be performed more accurately.
On the basis of the above embodiment, the parameter acquisition module 301 specifically includes: a shooting trigger unit, configured to trigger the shooting function at each sampling instant; a position parameter acquisition unit, configured to obtain the measurement parameters and state parameters of the robot arm target position at the corresponding sampling instant; and a parameter determination unit, configured to determine the pose state predicted value and pose state minimum mean square error from the state parameters, wherein the photographed target object is fixed relative to the robot arm target position.
Correspondingly, the correction module 302 specifically includes: an error gain determination unit, configured to determine the error gain from the pose state minimum mean square error and the measurement parameters; and a parameter correction unit, configured to correct the pose state predicted value and the pose state minimum mean square error according to the error gain, the measurement parameters, and the state parameters.
On the basis of the above embodiment, the apparatus further includes: a motion planning module, configured to perform motion planning on the robot arm target position before the shooting function is triggered at each sampling instant and the pose state predicted value and pose state minimum mean square error of the arm target position are obtained, so that at each sampling instant the motion parameters of the target position pose are determined from the motion planning result, the motion parameters belonging to the state parameters.
On the basis of the above embodiment, the state parameters include: the corrected minimum mean square error of the previous sampling instant, the corrected predicted value of the previous sampling instant, the sampling period, the motion parameters of the target position pose at the current sampling instant, and the preset first covariance matrix.
On the basis of the above embodiment, the parameter determination unit specifically includes: a predicted value determination subunit, configured to determine the pose state predicted value from the sampling period, the corrected predicted value of the previous sampling instant, and the motion parameters of the target position pose at the current sampling instant; and an error determination subunit, configured to determine the pose state minimum mean square error from the sampling period, the corrected minimum mean square error of the previous sampling instant, and the preset first covariance matrix.
On the basis of the above embodiment, the measurement parameters include: the preset observation vector and the preset second covariance matrix.
On the basis of the above embodiment, the parameter correction unit specifically includes: a measurement value determination subunit, configured to determine the pose measurement value from the pose state predicted value, the preset observation vector, and the preset second covariance matrix; a predicted value correction subunit, configured to correct the pose state predicted value according to the sampling period, the error gain, the preset observation vector, and the pose measurement value, to obtain the corrected predicted value; and an error correction subunit, configured to correct the pose state minimum mean square error according to the error gain and the preset observation vector, to obtain the corrected minimum mean square error.
The pose error correction apparatus provided in this embodiment is applicable to the pose error correction method provided by any of the above embodiments, and has the corresponding functions and beneficial effects.
Embodiment 4
FIG. 4 is a schematic structural diagram of a robot according to Embodiment 4 of the present invention. As shown in FIG. 4, the robot includes a processor 40, a storage device 41, an image acquisition device 42, a robot arm 43, an input device 44, and an output device 45. The number of processors 40 in the robot may be one or more; one processor 40 is taken as an example in FIG. 4. The processor 40, storage device 41, image acquisition device 42, robot arm 43, input device 44, and output device 45 in the robot may be connected by a bus or in other ways; connection by a bus is taken as the example in FIG. 4.
The image acquisition device 42 is configured to capture images; the robot arm 43 is configured to fix the target object.
The storage device 41, as a computer-readable storage medium, is used to store one or more programs, such as the program instructions/modules corresponding to the pose error correction method in the embodiments of the present invention (for example, the parameter acquisition module 301, the correction module 302, and the pose determination module 303 in the pose error correction apparatus). The processor 40 runs the software programs, instructions, and modules stored in the storage device 41, thereby executing the robot's various functional applications and data processing, that is, implementing the pose error correction described above. Note that the controller of FIG. 1b includes the processor 40 and the storage device 41, and optionally the input device 44 and the output device 45.
The storage device 41 may mainly include a program storage area and a data storage area; the program storage area can store an operating system and the application programs required by at least one function, and the data storage area can store data created according to the use of the robot, and the like. In addition, the storage device 41 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some examples, the storage device 41 may further include memory located remotely from the processor 40, which may be connected to the robot via a network; examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 44 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the robot. The output device 45 may include a display device such as a display screen.
实施例五
本发明实施例五还提供一种包含计算机可执行指令的存储介质,所述计算机可执行指令在由计算机处理器执行时用于执行一种位姿误差修正方法,该方法包括:
在每个采样时刻触发拍摄功能,并获取机械臂目标位置的位姿状态预测值以及位姿状态最小均方误差,其中,拍摄的目标对象与机械臂目标位置的位姿相对固定;
对位姿状态预测值以及位姿状态最小均方误差进行修正,以得到修正预测值和修正最小均方误差;
根据修正预测值和修正最小均方误差确定对应采样时刻拍摄的图像中目标对象的拍摄区域位姿。
当然,本发明实施例所提供的一种包含计算机可执行指令的存储介质,其计算机可执行指令不限于如上所述的位姿误差修正方法操作,还可以执行本发明任意实施例所提供的位姿误差修正方法中的相关操作。
通过以上关于实施方式的描述,所属领域的技术人员可以清楚地了解到,本发明可借助软件及必需的通用硬件来实现,当然也可以通过硬件实现,但很多情况下前者是更佳的实施方式。基于这样的理解,本发明的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品可以存储在计算机可读存储介质中,如计算机的软盘、只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、闪存(FLASH)、硬盘或光盘等,包括若干指令用以使得一台计算机设备(可以是机器人,个人计算机,服务器,或者网络设备等)执行本发明各个实施例所述的位姿误差修正方法。
It is worth noting that, in the above embodiments of the pose error correction apparatus, the units and modules included are merely divided according to functional logic, but the division is not limited to the above, as long as the corresponding functions can be implemented; in addition, the specific names of the functional units are merely for ease of distinguishing them from one another, and are not intended to limit the scope of protection of the present invention.
Note that the above are only preferred embodiments of the present invention and the technical principles applied. Those skilled in the art will understand that the present invention is not limited to the specific embodiments described herein, and that various obvious changes, readjustments and substitutions can be made without departing from the scope of protection of the present invention. Therefore, although the present invention has been described in some detail through the above embodiments, it is not limited to them and, without departing from the concept of the invention, may include further equivalent embodiments, the scope of the invention being determined by the appended claims.

Claims (10)

  1. A pose error correction method, characterized by comprising:
    triggering a shooting function at each sampling instant, and acquiring a pose-state prediction value and a pose-state minimum mean square error of a target position of a robotic arm, wherein the pose of a photographed target object is relatively fixed with respect to the target position of the robotic arm;
    correcting the pose-state prediction value and the pose-state minimum mean square error to obtain a corrected prediction value and a corrected minimum mean square error; and
    determining, according to the corrected prediction value and the corrected minimum mean square error, a captured-region pose of the target object in an image taken at the corresponding sampling instant.
  2. The pose error correction method according to claim 1, characterized in that acquiring the pose-state prediction value and the pose-state minimum mean square error of the target position of the robotic arm comprises:
    acquiring measurement parameters and state parameters of the target position of the robotic arm at the corresponding sampling instant; and
    determining the pose-state prediction value and the pose-state minimum mean square error according to the state parameters;
    correspondingly, correcting the pose-state prediction value and the pose-state minimum mean square error to obtain the corrected prediction value and the corrected minimum mean square error comprises:
    determining an error gain according to the pose-state minimum mean square error and the measurement parameters; and
    correcting the pose-state prediction value and the pose-state minimum mean square error according to the error gain, the measurement parameters and the state parameters.
  3. The pose error correction method according to claim 2, characterized by further comprising, before triggering the shooting function at each sampling instant and acquiring the pose-state prediction value and the pose-state minimum mean square error of the target position of the robotic arm:
    performing motion planning for the target position of the robotic arm, so that at each sampling instant motion parameters of the target position pose are determined according to the motion planning result, the motion parameters belonging to the state parameters.
  4. The pose error correction method according to claim 2, characterized in that the state parameters comprise: a corrected minimum mean square error of a previous sampling instant, a corrected prediction value of the previous sampling instant, a sampling period, motion parameters of the target position pose at the current sampling instant, and a preset first covariance matrix.
  5. The pose error correction method according to claim 4, characterized in that determining the pose-state prediction value and the pose-state minimum mean square error according to the state parameters comprises:
    determining the pose-state prediction value according to the sampling period, the corrected prediction value of the previous sampling instant and the motion parameters of the target position pose at the current sampling instant; and
    determining the pose-state minimum mean square error according to the sampling period, the corrected minimum mean square error of the previous sampling instant and the preset first covariance matrix.
  6. The pose error correction method according to claim 4, characterized in that the measurement parameters comprise: a preset observation vector and a preset second covariance matrix.
  7. The pose error correction method according to claim 6, characterized in that correcting the pose-state prediction value and the pose-state minimum mean square error according to the error gain, the measurement parameters and the state parameters comprises:
    determining a pose measurement value according to the pose-state prediction value, the preset observation vector and the preset second covariance matrix;
    correcting the pose-state prediction value according to the sampling period, the error gain, the preset observation vector and the pose measurement value, to obtain the corrected prediction value; and
    correcting the pose-state minimum mean square error according to the error gain and the preset observation vector, to obtain the corrected minimum mean square error.
  8. A pose error correction apparatus, characterized by comprising:
    a parameter acquisition module, configured to trigger a shooting function at each sampling instant and acquire a pose-state prediction value and a pose-state minimum mean square error of a target position of a robotic arm, wherein the pose of a photographed target object is relatively fixed with respect to the target position of the robotic arm;
    a correction module, configured to correct the pose-state prediction value and the pose-state minimum mean square error to obtain a corrected prediction value and a corrected minimum mean square error; and
    a pose determination module, configured to determine, according to the corrected prediction value and the corrected minimum mean square error, a captured-region pose of the target object in an image taken at the corresponding sampling instant.
  9. A robot, characterized by comprising:
    one or more processors;
    a storage device, configured to store one or more programs;
    an image acquisition device, configured to capture images; and
    a robotic arm, configured to hold a target object;
    wherein, when executed by the one or more processors, the one or more programs cause the one or more processors to implement the pose error correction method according to any one of claims 1-7.
  10. A storage medium containing computer-executable instructions, characterized in that the computer-executable instructions, when executed by a computer processor, are used to perform the pose error correction method according to any one of claims 1-7.
PCT/CN2017/103263 2017-05-18 2017-09-25 Pose error correction method and apparatus, robot and storage medium WO2018209862A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710354828.0 2017-05-18
CN201710354828.0A CN107030699B (zh) 2017-05-18 2017-05-18 Pose error correction method and apparatus, robot and storage medium

Publications (1)

Publication Number Publication Date
WO2018209862A1 true WO2018209862A1 (zh) 2018-11-22

Family

ID=59539006

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/103263 WO2018209862A1 (zh) 2017-05-18 2017-09-25 Pose error correction method and apparatus, robot and storage medium

Country Status (2)

Country Link
CN (1) CN107030699B (zh)
WO (1) WO2018209862A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110631588A (zh) * 2019-09-23 2019-12-31 电子科技大学 RBF network-based visual navigation and positioning method for an unmanned aerial vehicle
CN113274136A (zh) * 2021-05-17 2021-08-20 上海微创医疗机器人(集团)股份有限公司 Pose adjustment method, surgical robot system and storage medium

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107030699B (zh) 2017-05-18 2020-03-10 广州视源电子科技股份有限公司 Pose error correction method and apparatus, robot and storage medium
CN109959381B (zh) * 2017-12-22 2021-06-04 深圳市优必选科技有限公司 Positioning method and apparatus, robot, and computer-readable storage medium
CN108068115B (zh) * 2017-12-30 2021-01-12 福建铂格智能科技股份公司 Vision-feedback-based closed-loop position calibration algorithm for a parallel robot
CN109633666B (zh) * 2019-01-18 2021-02-02 广州高新兴机器人有限公司 Lidar-based positioning method in an indoor dynamic environment, and computer storage medium
CN110370280B (zh) * 2019-07-25 2021-11-30 深圳市天博智科技有限公司 Feedback control method and system for robot behavior, and computer-readable storage medium
CN112785682A (zh) * 2019-11-08 2021-05-11 华为技术有限公司 Model generation method, and model reconstruction method and apparatus
CN111912337B (zh) 2020-07-24 2021-11-09 上海擎朗智能科技有限公司 Method, apparatus, device and medium for determining robot pose information
CN111881836A (zh) * 2020-07-29 2020-11-03 上海商汤临港智能科技有限公司 Target object recognition method, and related apparatus and device
CN112348878B (zh) * 2020-10-23 2023-03-21 歌尔科技有限公司 Positioning test method and apparatus, and electronic device
CN112975983B (zh) * 2021-03-16 2022-04-01 上海三一重机股份有限公司 Boom correction method and apparatus for a working machine
CN113843796B (zh) * 2021-09-30 2023-04-28 上海傅利叶智能科技有限公司 Data transmission method and apparatus, control method and apparatus for an online robot, and online robot
CN113936467B (zh) * 2021-11-17 2022-12-16 同济大学 Road photographing method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104626206A (zh) * 2014-12-17 2015-05-20 西南科技大学 Pose information measurement method for robot operation in an unstructured environment
CN105354433A (zh) * 2015-11-24 2016-02-24 北京邮电大学 Method for determining the relative influence of space manipulator parameters on motion reliability
CN106251282A (zh) * 2016-07-19 2016-12-21 中国人民解放军63920部队 Method and apparatus for generating a simulated image of a robotic-arm sampling environment
WO2017072281A1 (de) * 2015-10-30 2017-05-04 Keba Ag Method, control system and motion-setting means for controlling the movements of articulated arms of an industrial robot
CN107030699A (zh) * 2017-05-18 2017-08-11 广州视源电子科技股份有限公司 Pose error correction method and apparatus, robot and storage medium

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2919135B2 (ja) * 1991-10-09 1999-07-12 川崎重工業株式会社 Robot position correction method
CN101402199B (zh) * 2008-10-20 2011-01-26 北京理工大学 Vision-based method for a hand-eye robot with low servo accuracy to grasp a moving target
CN102059703A (zh) * 2010-11-22 2011-05-18 北京理工大学 Robot visual servo control method based on adaptive particle filtering
FI20115326A0 (fi) * 2011-04-05 2011-04-05 Zenrobotics Oy Method for invalidating sensor measurements after a picking action in a robot system
KR101401415B1 (ko) * 2012-06-29 2014-05-30 한국과학기술연구원 Robot control apparatus and method for generating consistent motion including contact and force control
CN104476549B (zh) * 2014-11-20 2016-04-27 北京卫星环境工程研究所 Robotic arm motion path compensation method based on visual measurement
CN105196292B (zh) * 2015-10-09 2017-03-22 浙江大学 Visual servo control method based on iteration with variable time length
CN106041927A (zh) * 2016-06-22 2016-10-26 西安交通大学 Hybrid visual servo system and method combining eye-to-hand and eye-in-hand configurations
CN106123901B (zh) * 2016-07-20 2019-08-06 上海乐相科技有限公司 Positioning method and apparatus
CN106182003A (zh) * 2016-08-01 2016-12-07 清华大学 Robotic arm teaching method, apparatus and system

Also Published As

Publication number Publication date
CN107030699B (zh) 2020-03-10
CN107030699A (zh) 2017-08-11

Similar Documents

Publication Publication Date Title
WO2018209862A1 (zh) Pose error correction method and apparatus, robot and storage medium
TWI672206B Non-contact tool-center-point calibration device and method for a robotic arm, and robotic arm system with calibration function
US20200096317A1 Three-dimensional measurement apparatus, processing method, and non-transitory computer-readable storage medium
JP6622503B2 (ja) Camera model parameter estimation apparatus and program therefor
TWI404609B Calibration method and calibration apparatus for robotic arm system parameters
WO2021012124A1 (zh) Robot hand-eye calibration method and apparatus, computing device, medium and product
WO2021012122A1 (zh) Robot hand-eye calibration method and apparatus, computing device, medium and product
JP6324025B2 (ja) Information processing apparatus and information processing method
JP2020183035A5 (zh)
CN112171666B Pose calibration method and apparatus for a vision robot, vision robot, and medium
CN110193849A Robot hand-eye calibration method and apparatus
WO2018209592A1 (zh) Robot motion control method, robot and controller
US20150042784A1 Image photographing method and image photographing device
KR102111655B1 Automatic calibration method and apparatus for a robot vision system
TWI772731B Control device and control method
CN109421050A Robot control method and apparatus
US20220327721A1 Size estimation device, size estimation method, and recording medium
JP2017135495A (ja) Stereo camera and imaging system
CN109389645B Camera self-calibration method and system, camera, robot and cloud server
WO2016123813A1 (zh) Attitude relationship calculation method for a smart device, and smart device
JP2019077026A (ja) Control device, robot system, operation method of control device, and program
JP2022032975A (ja) Fish body length measuring device and fish body length measuring method
CN113658270A Multi-camera vision calibration method, apparatus, medium and system based on workpiece hole centers
JP2021124395A (ja) Pan/tilt angle calculation device and program therefor
CN116408790B Robot control method, apparatus, system and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17910367

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 24.03.2020)

122 Ep: pct application non-entry in european phase

Ref document number: 17910367

Country of ref document: EP

Kind code of ref document: A1