WO2020220284A1 - Aiming control method, mobile robot, and computer-readable storage medium - Google Patents

Aiming control method, mobile robot, and computer-readable storage medium Download PDF

Info

Publication number
WO2020220284A1
Authority
WO
WIPO (PCT)
Prior art keywords
target object
mobile robot
observation
sight
orientation
Prior art date
Application number
PCT/CN2019/085245
Other languages
English (en)
French (fr)
Inventor
匡正
关雁铭
Original Assignee
深圳市大疆创新科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司 filed Critical 深圳市大疆创新科技有限公司
Priority to CN201980002956.6A priority Critical patent/CN110876275A/zh
Priority to PCT/CN2019/085245 priority patent/WO2020220284A1/zh
Publication of WO2020220284A1 publication Critical patent/WO2020220284A1/zh

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0242Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using non-visible light signals, e.g. IR or UV signals
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0221Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving a learning process
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0223Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving speed control of the vehicle
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0253Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting relative motion information from a plurality of images taken successively, e.g. visual odometry, optical flow
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0276Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/66Remote control of cameras or camera parts, e.g. by remote control devices

Definitions

  • the present invention relates to the field of electronic technology, in particular to an aiming control method, a mobile robot and a computer-readable storage medium.
  • automatic aiming technology refers to a servo control technology that controls the relative orientation of the target object and the mobile robot to a desired value during the movement of the target object and/or the mobile robot.
  • commonly used aiming control methods are designed based on error: the current orientation of the target object relative to the mobile robot can be obtained through an image sensor, the difference between the desired orientation and the actual orientation of the mobile robot is determined based on that orientation, and a control strategy for the mobile robot is generated from the difference and a preset control law.
  • Practice has proved that the above method has defects such as large steady-state error, poor anti-noise ability, and control lag, resulting in inaccurate aiming control of the mobile robot.
  • the embodiments of the present invention provide an aiming control method, a mobile robot, and a computer-readable storage medium, which can improve the accuracy of aiming control.
  • an embodiment of the present invention provides an aiming control method, the method is applied to a mobile robot, the mobile robot includes a sight, and the method includes:
  • obtaining the observation orientation of the target object relative to the mobile robot at the current moment; determining the angular velocity of the target object according to the observation orientation; and determining a motion control parameter according to the angular velocity, the motion control parameter being used to control the sight to move in the direction of the target object.
  • an embodiment of the present invention provides a mobile robot, including a sight, a memory, and a processor:
  • the memory is used to store program code
  • the processor is configured to call the program code, and when the program code is executed, to perform the following operations:
  • obtain the observation orientation of the target object relative to the mobile robot at the current moment; determine the angular velocity of the target object according to the observation orientation; and determine a motion control parameter according to the angular velocity, the motion control parameter being used to control the sight to move in the direction of the target object.
  • an embodiment of the present invention provides a computer-readable storage medium that stores computer program instructions which, when executed, are used to implement the aiming control method of the above-mentioned first aspect.
  • in the embodiments of the present invention, when the observation orientation of the target object relative to the mobile robot at the current moment is obtained, the angular velocity of the target object is determined according to the observation orientation; the angular velocity is then used to determine the motion control parameter that controls the sight to move in the direction of the target object.
  • in this aiming control process, the angular velocity of the target object serves as the feedforward signal of the aiming control.
  • compared with aiming control based only on error, the sight can be accurately controlled to move along the direction of the target object, which improves the accuracy of aiming control.
  • Figure 1 is a schematic diagram of an automatic aiming provided by an embodiment of the present invention
  • Figure 2a is a schematic diagram of an aiming control system provided in the prior art
  • Figure 2b is a schematic diagram of an aiming control system provided by an embodiment of the present invention.
  • FIG. 3 is a flowchart of an aiming control method provided by an embodiment of the present invention.
  • FIG. 4 is a schematic flowchart of another aiming control method provided by an embodiment of the present invention.
  • FIG. 5 is a schematic diagram of an aiming control module provided by an embodiment of the present invention.
  • Fig. 6 is a schematic structural diagram of a mobile robot provided by an embodiment of the present invention.
  • Figure 1 is a schematic diagram of an automatic aiming provided by an embodiment of the present invention.
  • 101 represents a target object
  • 102 represents a mobile robot.
  • the target object may include a mobile robot device or other stationary ground equipment
  • 103 indicates the sight configured on the mobile robot
  • the arrow indicates the moving direction of the mobile robot.
  • Automatic aiming refers to controlling the relative position between the mobile robot and the target object to be at a desired value, so that the mobile robot can aim and hit the target object.
  • Fig. 2a is a schematic diagram of an aiming control system provided in the prior art.
  • the aiming control system shown in FIG. 2a may include a target object, a mobile robot (such as a wheeled mobile robot), a detector, a tracker, a controller, and an actuator.
  • the detector, the tracker, the controller and the actuator constitute an automatic aiming module in the aiming control system.
  • the image sensor of the mobile robot may collect an image of the current moment that includes the target object and transfer this image to the automatic aiming module; in response to the received image, the automatic aiming module performs the following operations: the detector and tracker process the current-moment image and determine the observation orientation of the target object relative to the mobile robot at the current moment, and then input the observation orientation to the controller; the controller determines the ideal orientation of the wheeled mobile robot according to the observation orientation, determines the motion control error according to the ideal orientation and the current orientation of the wheeled mobile robot, generates a control feedback result according to the motion control error and the control law of the controller, and transmits the control feedback result to the actuator, which performs aiming control on the wheeled mobile robot based on the control feedback result.
  • while the actuator aims according to the control feedback result, the relative orientation between the mobile robot and the target object may change.
  • the image sensor then collects the changed image and transmits it to the automatic aiming module so that the above process is repeated, thereby forming closed-loop feedback.
  • the controller in FIG. 2a may be a proportional-integral-differential (Proportional-Integral-Differential, PID) controller.
  • substituting the motion control error into the control law of the PID controller yields the control feedback result; the control law can be expressed as formula (1):
  • u(t) = Kp·e(t) + Ki·∫e(t)dt + Kd·de(t)/dt      (1)
  • in formula (1), e represents the error, Kp represents the proportional coefficient, Ki represents the integral coefficient, and Kd represents the differential coefficient; Kp, Ki and Kd adjust the degree to which the current error, the accumulated error and the error change influence the output of the controller.
  • the PID controller constitutes a lead-lag corrector.
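  • For illustration, the following is a minimal discrete-time sketch of the control law in formula (1); the class name, gains and sampling period are illustrative assumptions rather than values taken from this disclosure.

```python
class PIDController:
    """Discrete-time PID: u = Kp*e + Ki*sum(e)*dT + Kd*(e - e_prev)/dT, mirroring formula (1)."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0          # accumulated error
        self.prev_error = None       # error at the previous sampling instant

    def update(self, error):
        self.integral += error * self.dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


pid = PIDController(kp=1.2, ki=0.05, kd=0.08, dt=0.01)   # illustrative gains
feedback = pid.update(error=0.15)
```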
  • however, the control method based on the PID controller has several flaws. First, there may be steady-state errors that cannot be eliminated. Second, its noise resistance is poor:
  • the differential link of the PID controller has a global amplitude-frequency characteristic of +20 dB/dec, which means that high-frequency noise in the system is amplified by the differential link and causes the system output to chatter. Third, the PID controller is an error-based feedback controller, that is, its output can only be produced after an error has appeared, so the output always lags behind the error change.
  • practice has shown that, when aiming control is performed based on the aiming control system described in FIG. 2a, the hit rate of the mobile robot on the target object is usually less than 10%.
  • to solve the above problems, the embodiment of the present invention proposes an aiming control system based on angular velocity feedforward, as shown in FIG. 2b. Like the aiming control system described in FIG. 2a, the aiming control system described in FIG. 2b may also include a detector, a tracker, a controller and an actuator; the difference from FIG. 2a is that an angular velocity feedforward path is introduced into the aiming control system described in FIG. 2b.
  • the angular velocity feedforward path and the controller each process the observation orientation determined by the detector and tracker, yielding the angular velocity feedforward result and the control feedback result of the controller respectively; finally, the angular velocity feedforward result and the control feedback result are merged and output to the actuator to achieve more accurate aiming control.
  • the aiming control system described in Figure 2b can improve the accuracy of aiming control by introducing angular velocity feedforward and combining it with the controller.
  • Practice has proved that by introducing angular velocity feedforward, the hit rate of the mobile robot on the target object can reach more than 50%.
  • the angular velocity feedforward output can also be used as the input of the actuator alone.
  • the embodiment of the present invention provides an aiming control method as shown in FIG. 3.
  • the aiming control method may be applied to the aiming control system shown in FIG. 2b, and the aiming control method may be applied to a mobile robot including a sight.
  • the aiming control method described in FIG. 3 may be executed by a mobile robot, specifically, may be executed by a processor of the mobile robot.
  • the aiming control method described in FIG. 3 may include the following steps:
  • Step S301 Obtain the current observation position of the target object relative to the mobile robot.
  • the target object refers to an object to be aimed at.
  • the observation orientation includes the direction and distance of the target object relative to the mobile robot at the current moment.
  • the observation orientation of the target object relative to the mobile robot may refer to the relative direction and distance from any point on the target object to any point on the mobile robot.
  • for example, assuming that the sight of the mobile robot includes a camera,
  • the observation orientation of the target object relative to the mobile robot may refer to: the direction and distance from the center point of the camera's CMOS sensor to the center point of the imaging plane in which the target object lies.
  • the observation orientation of the target object relative to the mobile robot may also refer to the relative direction and distance between the center of mass of the mobile robot and the center of mass of the target object. It should be understood that the foregoing are only two definition methods of the observation orientation listed in the embodiment of the present invention. In other embodiments, those skilled in the art can set the definition of the observation orientation according to actual needs.
  • the sight of the mobile robot may include a camera, and the camera may be used to collect a target image containing the target object at the current moment, and then image segmentation or deep learning techniques are used to process the target image to determine the target object.
  • using image segmentation to determine the target object included in the target image refers to: segmenting the target image into at least one object region according to a preset segmentation rule; performing feature extraction on each object region after segmentation to obtain each object The feature information of the area; then compare the feature information of each object area with the pre-stored or predetermined feature information of the target object to determine whether the object area is the target object, and if it is, the object area is determined to contain The target object area of the target object. Based on the above process, the target object included in the target image can be determined.
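  • As a rough sketch of the segmentation-and-comparison procedure described above (the feature representation and the similarity threshold are assumptions for illustration; the disclosure does not prescribe them):

```python
import numpy as np

def find_target_region(regions, target_feature, threshold=0.8):
    """Return the segmented region whose feature vector best matches the stored
    target-object feature, or None if no region is similar enough."""
    best_region, best_score = None, threshold
    for region in regions:                          # each region: {"bbox": (x, y, w, h), "feature": np.ndarray}
        f = region["feature"]
        score = float(np.dot(f, target_feature) /
                      (np.linalg.norm(f) * np.linalg.norm(target_feature) + 1e-9))
        if score > best_score:                      # cosine similarity as a stand-in comparison
            best_region, best_score = region, score
    return best_region
```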
  • based on the aiming control system shown in FIG. 2b, the embodiment of the present invention can call the detector and tracker to execute the above-mentioned step of using image segmentation to determine the target object included in the target image. Specifically, the detector can use image segmentation to separate the target object from the target image.
  • the target object can be represented in the form of a rectangular frame; the rectangular frame is then passed to the tracker. Based on information such as the image color and gradient inside the received rectangular frame, together with information such as the historical position and size of the rectangular frame, the tracker fuses a rectangular frame with less noise, and the fused rectangular frame can be used to represent the target object.
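  • A minimal sketch of fusing the detector's rectangular frame with its recent history to obtain a less noisy frame; the exponential-smoothing rule and the weight are assumptions, since the exact fusion used by the tracker is not specified here:

```python
def fuse_rectangle(previous, detected, weight=0.6):
    """Blend the newly detected rectangle (x, y, w, h) with the previously fused one."""
    if previous is None:
        return detected
    return tuple(weight * d + (1.0 - weight) * p for d, p in zip(detected, previous))

fused = None
for detected_box in [(100.0, 80.0, 40.0, 60.0), (104.0, 79.0, 42.0, 58.0)]:   # per-frame detector output (illustrative)
    fused = fuse_rectangle(fused, detected_box)
```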
  • after the target object has been determined, the observation orientation of the target object relative to the mobile robot can be determined according to some information of the target object in the target image and information of the target image itself. How to determine the observation orientation of the target object relative to the mobile robot based on this information will be described later.
  • Step S302 Determine the angular velocity of the target object according to the observation orientation.
  • the detector contains a certain amount of noise; therefore, the target object determined by the detector and tracker may not be accurate enough, and the observation orientation of the target object relative to the mobile robot determined from that target object may likewise contain noise-induced errors. If the angular velocity is determined directly from the observation orientation, the motion control parameters ultimately generated from the angular velocity will contain errors, which affects the accuracy of the aiming control. Therefore, when the angular velocity of the target object is determined according to the observation orientation in step S302, the observation orientation may first be filtered to eliminate the errors it contains.
  • since the motion of the target object is continuous in position and velocity, the position and velocity of the target object at the next moment will not deviate greatly from those at the previous moment.
  • to further improve the accuracy of the angular velocity determined in step S302, when the angular velocity is determined according to the observation orientation, the predicted orientation of the target object relative to the mobile robot at the current moment can also be calculated from the position and velocity of the target object relative to the mobile robot at the previous moment;
  • fusion filtering is then performed on the predicted orientation and the observation orientation to obtain the angular velocity of the target object.
  • the specific implementation of performing fusion filtering processing on the predicted azimuth and the observation azimuth will be described in detail later.
  • Step S303 Determine a motion control parameter according to the angular velocity.
  • the motion control parameter is used to control the sight of the mobile robot to move in the direction of the target object, and the motion control parameter may include an angular velocity.
  • the aforementioned process of invoking the detector to determine the target object and the process of determining the angular velocity according to the observation orientation in step S302 take a relatively long time, resulting in a large overall time delay on the angular velocity feedforward path. If the angular velocity determined in step S302 were output directly to the actuator as the motion control parameter to control the motion of the mobile robot, oscillation would result.
  • to avoid this, after the angular velocity of the target object is obtained in step S302, it is further processed so as to speed up the response.
  • this processing may include tracking-differentiation processing.
  • in tracking-differentiation processing, a tracking-differentiator is used as a lead correction superimposed on the angular velocity, and the superimposed angular velocity is used as the motion control parameter.
  • in the embodiment of the present invention, when the observation orientation of the target object relative to the mobile robot at the current moment is obtained, the angular velocity of the target object is determined according to the observation orientation, and the motion control parameter used to control the sight to move in the direction of the target object is then determined from the angular velocity.
  • the angular velocity of the target object is used as the feedforward signal for aiming control, and the motion control parameters determined based on the feedforward signal can accurately control the aiming device to move along the direction of the target object, which improves the aiming The accuracy of control.
  • FIG. 4 is another aiming control method provided by an embodiment of the present invention.
  • the aiming control method can be applied to the aiming control system shown in FIG. 2b, and the aiming control method is applied to a mobile robot.
  • the robot includes a sight.
  • the sight control method described in FIG. 4 may include the following steps:
  • Step S401 Obtain the current observation position of the target object relative to the mobile robot.
  • step S401 may be implemented as follows: determine the target object according to the target image collected by the camera; then, according to the target image and the target object, determine the observation orientation of the target object relative to the mobile robot at the current moment.
  • the observation orientation of the target object relative to the mobile robot at the current moment may be expressed in the form of rectangular coordinates, or may be expressed in the form of polar coordinates.
  • for example, if the observation orientation is expressed in polar coordinates, determining the observation orientation of the target object relative to the mobile robot at the current moment according to the target image and the target object may include the following steps: (1) determine the polar radius of the observation orientation according to the height of the target object in the target image and the actual height of the target object; (2) determine the polar angle of the observation orientation according to the angular equivalent of a pixel, the abscissa of the center point of the target object, and the lateral resolution of the target image.
  • the actual height of the target object in (1) may refer to the physical height of the target object. It can be seen from the foregoing that, in the embodiments of the present invention, the image segmentation technology can be used to segment the target image collected by the camera into at least one object area, and then the characteristic information of each object area is extracted, and the target object is determined according to the characteristic information of each object area. Based on this, the height corresponding to the target object in the target image in (1) above may refer to the height of the target object in the target image; or, assuming that the target object is represented by a rectangular frame, the target object is The corresponding height in the target image may also refer to the height of the rectangular frame in the target image. Optionally, the height corresponding to the target object in the target image may refer to the height of pixels. Exemplarily, the height corresponding to the target object in the target image may be 5 pixels.
  • as a feasible implementation, in (1) above the polar radius of the observation orientation may be determined by substituting the height of the target object in the target image and the actual height of the target object into the polar-radius determination formula, formula (2); the result of the calculation is the polar radius of the observation orientation.
  • in formula (2), r represents the polar radius of the observation orientation, H represents the actual height of the target object, h represents the height of the target object in the target image, and k represents a constant whose meaning is the height of the target object in the target image when the polar radius is 1.
  • formula (2) is based on the principle of similar triangles; with k, h and H known, the polar radius of the target object relative to the mobile robot at the current moment can be calculated.
  • the angular equivalent of a pixel in step (2) above indicates the conversion relationship between pixels and angles, that is, how large an angle one pixel represents;
  • the center point of the target object may refer to the center of mass of the target object, or to the center point of the rectangular frame used to represent the target object;
  • the abscissa of the center point of the target object may be a pixel value or a value in a physical coordinate system;
  • for ease of calculation, the abscissa described here in the embodiment of the present invention is expressed as a pixel value; the lateral resolution of the target image refers to how many pixels the target image contains in the abscissa direction.
  • one implementation of determining the polar angle of the observation orientation is to substitute the angular equivalent of a pixel, the abscissa of the center point of the target object, and the lateral resolution of the target image into the polar-angle determination formula, formula (3), and take the result of the calculation as the polar angle of the observation orientation.
  • in formula (3), θ represents the polar angle of the observation orientation, N_ang represents the angular equivalent of a pixel, u represents the abscissa of the center point of the target object, and H_res represents the lateral resolution of the target image.
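  • The following sketch computes the observation orientation (polar radius and polar angle) from a detected rectangular frame. The exact expressions of formulas (2) and (3) are not reproduced in this text, so the forms r = k·H/h and θ = N_ang·(u − H_res/2) used below are assumptions consistent with the similar-triangles reasoning and the variable definitions above; all numeric values are illustrative:

```python
import math

def observation_orientation(box_height_px, box_center_u_px,
                            actual_height_m, k, n_ang_rad_per_px, h_res_px):
    """Polar radius r (assumed r = k*H/h) and polar angle theta
    (assumed theta = N_ang*(u - H_res/2)) of the target relative to the robot."""
    r = k * actual_height_m / box_height_px
    theta = n_ang_rad_per_px * (box_center_u_px - h_res_px / 2.0)
    return r, theta

r, theta = observation_orientation(box_height_px=50, box_center_u_px=700,
                                   actual_height_m=0.4, k=600.0,
                                   n_ang_rad_per_px=math.radians(0.1), h_res_px=1280)
```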
  • in other embodiments, if the sight also includes a time-of-flight (TOF) sensor, step S401 may also be implemented as follows: determine the target object according to the target image collected by the camera; determine the first observation orientation of the target object relative to the mobile robot at the current moment according to the target image and the target object; determine the second observation orientation of the target object relative to the mobile robot at the current moment according to the depth image obtained by the TOF sensor and the target object; and obtain the observation orientation of the target object relative to the mobile robot at the current moment according to the first observation orientation and the second observation orientation.
  • the working principle of the TOF sensor is as follows: the TOF sensor emits modulated near-infrared light, which is reflected after encountering an object;
  • the TOF sensor calculates the time difference or phase difference between the emitted near-infrared light and the received reflection to derive the distance of the photographed object and thereby generate a depth image.
  • the implementation manner of determining the second observation orientation of the target object relative to the mobile robot at the current moment according to the depth image obtained by the TOF sensor and the target object in the embodiment of the present invention may be : Map the target object determined based on the target image or the rectangular frame used to represent the target object to the depth image, so that the second observation position of the target object relative to the mobile robot can be determined in the depth image.
  • the second observation position of the target object relative to the mobile robot determined by the TOF sensor can be expressed in rectangular coordinates or polar coordinates.
  • the first observation position and the second observation position use the same representation.
  • after the first observation orientation and the second observation orientation of the target object relative to the mobile robot are determined by the above methods, the observation orientation of the target object relative to the mobile robot at the current moment can be obtained from the first observation orientation and the second observation orientation.
  • the implementation of obtaining the observation position of the target object relative to the mobile robot at the current moment according to the first observation position and the second observation position may include: comparing the first observation position and the The second observation orientation performs a weighted average operation, and the obtained operation result is used as the observation orientation of the target object relative to the mobile robot at the current moment.
  • the implementation manner of obtaining the observation orientation of the target object relative to the mobile robot at the current moment according to the first observation orientation and the second observation orientation may further include: comparing the second observation orientation based on a preset fusion model An observation orientation and the second observation orientation are fused to obtain a fusion value; the fusion value is determined as the observation orientation of the target object relative to the mobile robot at the current moment.
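  • A small sketch of the weighted-average fusion of the camera-based first observation orientation and the TOF-based second observation orientation; the weights are illustrative assumptions:

```python
def fuse_orientations(first_xy, second_xy, w_first=0.6):
    """Weighted average of two (x, y) observation orientations."""
    w_second = 1.0 - w_first
    return tuple(w_first * a + w_second * b for a, b in zip(first_xy, second_xy))

fused_xy = fuse_orientations((1.90, 0.35), (2.05, 0.30))
```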
  • Step S402 Determine the angular velocity of the target object according to the observation orientation.
  • as described above, step S402 may be implemented as follows: determine the predicted orientation of the target object relative to the mobile robot at the current moment according to the position and velocity of the target object relative to the mobile robot at the previous moment; perform fusion filtering on the predicted orientation and the observation orientation; and determine the angular velocity of the target object according to the result of the fusion filtering.
  • since the motion of the target object is continuous in position and velocity, the predicted orientation of the target object relative to the mobile robot at the current moment may be obtained by propagating the position of the target object relative to the mobile robot at the previous moment along the velocity direction over one sampling interval.
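  • A one-line sketch of this prediction step, propagating the previous relative position along the relative velocity for one sampling interval (values illustrative):

```python
def predict_orientation(prev_position, prev_velocity, dt):
    """Dead-reckoning prediction of the relative position (x, y) at the current moment."""
    return (prev_position[0] + prev_velocity[0] * dt,
            prev_position[1] + prev_velocity[1] * dt)

predicted_xy = predict_orientation(prev_position=(1.80, 0.40), prev_velocity=(0.50, -0.10), dt=0.02)
```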
  • Kalman filter may be selected to perform fusion filtering processing on the predicted azimuth and the observation azimuth. In the following description, the Kalman filter is taken as an example to introduce how to perform fusion processing on the predicted azimuth and the observation azimuth.
  • when the Kalman filter is used to fuse the predicted orientation and the observation orientation, the planar motion needs to be decoupled, so the predicted orientation and the observation orientation need to be expressed in rectangular coordinates. If the predicted orientation and the observation orientation determined above are expressed in polar coordinates, the polar coordinates must first be converted to rectangular coordinates.
  • for example, assuming that the observation orientation of the target object is expressed in polar coordinates as (r, θ), the polar coordinates of the observation orientation can be converted into rectangular coordinates by formula (4):
  • Px = r·cos θ,  Py = r·sin θ      (4)
  • where r represents the polar radius of the observation orientation in polar coordinates, θ represents the polar angle of the observation orientation in polar coordinates, Px represents the abscissa of the observation orientation in rectangular coordinates, and Py represents the ordinate of the observation orientation in rectangular coordinates.
  • after the transformation to rectangular coordinates, Kalman filtering is performed separately on the abscissa direction (denoted x) and the ordinate direction (denoted y). The value of the abscissa represents the relative position of the target object with respect to the mobile robot in the abscissa direction x (denoted x1), and differencing the relative position in the abscissa direction yields
  • the relative velocity of the target object with respect to the mobile robot in the abscissa direction (denoted x2); similarly, the value of the ordinate of the observation orientation represents the relative position of the target object with respect to the mobile robot in the ordinate direction y (denoted y1), and differencing the relative position in the ordinate direction yields the relative velocity of the target object with respect to the mobile robot in the ordinate direction y (denoted y2).
  • the state variables in the x direction are x = (x1, x2)^T and those in the y direction are y = (y1, y2)^T; performing Kalman filtering separately on the abscissa and ordinate directions essentially means filtering each state variable in the abscissa direction and filtering each state variable in the ordinate direction.
  • the state space model defined by the Kalman filter can be expressed as formulas (5) and (6):
  • x1(k+1) = x1(k) + dT·x2(k) + W(k)      (5)
  • x2(k+1) = x2(k) + V(k)      (6)
  • k represents time k
  • k+1 represents time k+1
  • x1(k+1) represents the state of state variable x1 at time k+1
  • x2(k+1) represents the state of state variable x2 at time k+1
  • x1(k) and x2(k) respectively represent the state of state variable x1 and state variable x2 at time k
  • W and V represent predicted noise and observation noise, respectively.
  • the prediction noise can be regarded as the deviation between the predicted orientation and the true orientation,
  • and the observation noise can be regarded as the deviation between the observed orientation and the true orientation.
  • taking Kalman filtering in the abscissa direction as an example, x represents the state variable of the predicted orientation in the abscissa direction and z represents the observed value of the state variable x. The iteration from time k to time k+1 can be expressed as formulas (7)-(11):
  • x(k+1|k) = A·x(k|k)      (7)
  • P(k+1|k) = A·P(k|k)·A^T + Q      (8)
  • K(k+1|k) = P(k+1|k)·(P(k+1|k) + R)^(-1)      (9)
  • x(k+1|k+1) = x(k+1|k) + K(k+1|k)·(z(k+1) − x(k+1|k))      (10)
  • P(k+1|k+1) = (1 − K(k+1|k))·P(k+1|k)      (11)
  • A represents the state transition matrix, and formula (7) predicts x at time k+1 from x at time k;
  • P(k+1|k) is the covariance corresponding to x(k+1|k), P(k|k) is the covariance corresponding to x(k|k), A^T is the transpose of A, and Q is the covariance matrix of the prediction noise; formula (8) gives the predicted covariance of x at time k+1 from x at time k;
  • K(k+1|k) represents the Kalman gain and R represents the observation noise; formula (9) determines the Kalman gain at time k+1 from the predicted covariance at time k+1 and the observation noise;
  • z(k+1) represents the observed value of the state variable x at time k+1, and x(k+1|k+1) represents the optimal estimate of the state variable x at time k+1 obtained by fusing the predicted value at time k+1 with the observed value at time k+1; P(k+1|k+1) is the covariance corresponding to x(k+1|k+1), prepared for the next recursion.
  • the optimal value x(k+1|k+1) of the state variable x in the abscissa direction is obtained through the above process, and the optimal value y(k+1|k+1) of the state variable y in the ordinate direction is obtained with the same process; x(k+1|k+1) and y(k+1|k+1) are the results of the fusion filtering.
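  • A single-axis sketch of the predict/update cycle in formulas (7)-(11), with the state [relative position, relative velocity] as in formulas (5)-(6). It is written with an explicit observation matrix H (only the position is observed), which the scalar-style formulas above leave implicit; the noise covariances Q and R are illustrative assumptions:

```python
import numpy as np

def kalman_step(x, P, z, dt, q=1e-3, r_obs=1e-2):
    """One Kalman iteration on one axis; x = [position, velocity], z = observed position."""
    A = np.array([[1.0, dt],
                  [0.0, 1.0]])                     # state transition, matching formulas (5)-(6)
    Q = q * np.eye(2)                              # prediction-noise covariance
    H = np.array([[1.0, 0.0]])                     # only the relative position is measured
    R = np.array([[r_obs]])                        # observation-noise covariance

    x_pred = A @ x                                 # formula (7)
    P_pred = A @ P @ A.T + Q                       # formula (8)
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)   # formula (9)
    x_new = x_pred + K @ (z - H @ x_pred)          # formula (10)
    P_new = (np.eye(2) - K @ H) @ P_pred           # formula (11)
    return x_new, P_new

x, P = np.zeros(2), np.eye(2)
x, P = kalman_step(x, P, z=np.array([1.95]), dt=0.02)
```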
  • the determining the angular velocity of the target object according to the result of the fusion filtering process may include: determining the angular deviation of the target object relative to the mobile robot according to the result of the fusion filtering process; The angular deviation is subjected to differential processing to obtain the angular velocity of the target object.
  • specifically, the two obtained components in the abscissa direction and the ordinate direction can be transformed back into polar coordinates, the angular deviation is obtained from the polar coordinates, and the angular deviation is then differenced to obtain the angular velocity of the target object.
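  • A short sketch of converting the filtered rectangular components back to an angle and differencing it to obtain the angular velocity (the wrap-around handling is an illustrative detail):

```python
import math

def angular_velocity(prev_xy, curr_xy, dt):
    """Angle deviation from filtered (x, y) estimates, differenced over one sampling interval."""
    prev_theta = math.atan2(prev_xy[1], prev_xy[0])
    curr_theta = math.atan2(curr_xy[1], curr_xy[0])
    dtheta = math.atan2(math.sin(curr_theta - prev_theta),
                        math.cos(curr_theta - prev_theta))    # wrap into (-pi, pi]
    return dtheta / dt

omega = angular_velocity((1.90, 0.30), (1.88, 0.33), dt=0.02)
```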
  • Step S403 Determine the motion control parameter according to the angular velocity.
  • the embodiment of the present invention may use a tracking-differentiator as a lead correction superimposed on the angular velocity determined in step S402, and the angular velocity after the tracking-differentiation process may be used as a motion control parameter.
  • step S403 may include: performing tracking-differentiation processing on the angular velocity to obtain a differential output and a follow output; and determining the motion control parameter according to the differential output and the follow output.
  • determining the motion control parameter according to the differential output and the follow output may include: determining the differential gain and the follow gain of the tracking-differentiation processing, and adding the product of the differential output and the differential gain to the product of the follow output and the follow gain to obtain the motion control parameter.
  • optionally, a linear tracking-differentiator is used to perform the tracking-differentiation processing on the angular velocity.
  • the tracking-differentiator has the transfer function shown in formula (12), which is a second-order inertial element in which r is a known parameter and s is the variable of the transfer function.
  • by choosing appropriate state variables, assumed to be x = (x1, x2)^T, the transfer function can be converted to state space, giving a tracking-differentiator with first-order filtering and first-order differentiation, as shown in formulas (13) and (14):
  • x1(k+1) = x1(k) + dT·x2(k)      (13)
  • x2(k+1) = x2(k) + dT·fst(u, x, k)      (14)
  • where u is the signal to be tracked/differentiated, that is, the angular velocity of the target object output by the Kalman filter; x1 represents the follow signal of the original signal; x2 represents the differential signal of the original signal; and fst(u, x, k) is the follow function, determined from formula (12) and written out explicitly in formula (15).
  • the differential signal x2 obtained above is multiplied by the differential gain k2, the follow signal x1 is multiplied by the follow gain k1, and the two products are added to obtain the angular velocity of the target object after tracking-differentiation processing.
  • this angular velocity is used as the motion control parameter determined in step S403.
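  • A discrete sketch of the linear tracking-differentiator in formulas (13)-(14). Because formulas (12) and (15) are not reproduced in this text, fst is written here as the standard linear feedback −r²·(x1 − u) − 2r·x2 (the form corresponding to a second-order inertial element with parameter r); this is an assumption, as are the parameter values:

```python
def tracking_differentiator(u_sequence, r=20.0, dt=0.02, k1=1.0, k2=0.05):
    """Follow signal x1 and differential signal x2 per formulas (13)-(14);
    the weighted sum k1*x1 + k2*x2 is the tracking-differentiated angular velocity."""
    x1, x2 = 0.0, 0.0
    outputs = []
    for u in u_sequence:                            # u: angular velocity from the Kalman filter
        fst = -r * r * (x1 - u) - 2.0 * r * x2      # assumed linear form of fst(u, x, k)
        x1, x2 = x1 + dt * x2, x2 + dt * fst        # formulas (13) and (14)
        outputs.append(k1 * x1 + k2 * x2)           # follow gain k1, differential gain k2
    return outputs

feedforward = tracking_differentiator([0.00, 0.10, 0.22, 0.30])
```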
  • Step S404 Determine a motion control error according to the observation orientation and the current orientation of the mobile robot.
  • the angular velocity feedforward is combined with the controller for aiming control, and the controller determines the control feedback result of the controller according to the motion control error and the control law corresponding to the controller.
  • the motion control error refers to the difference between the current orientation and the desired orientation of the mobile robot.
  • the method of obtaining motion control error is: determining the desired orientation of the mobile robot according to the observation orientation of the target object relative to the mobile robot at the current moment; acquiring the current orientation of the mobile robot, and determining the difference between the current orientation and the desired orientation Is motion control error.
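  • A small sketch of the motion control error, assuming that the desired orientation is simply the bearing toward the target obtained from the observation orientation:

```python
import math

def motion_control_error(desired_heading, current_heading):
    """Difference between the desired orientation and the current orientation, wrapped into (-pi, pi]."""
    e = desired_heading - current_heading
    return math.atan2(math.sin(e), math.cos(e))

error = motion_control_error(desired_heading=0.35, current_heading=0.20)
```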
  • Step S405 Control the sight to move in the direction of the target object according to the motion control error and the motion control parameter.
  • controlling the sight to move toward the orientation of the target object according to the motion control error and the motion control parameter may be implemented as follows: the motion control error is processed according to the preset control law of the controller to obtain the control feedback result; the sight is then controlled to move toward the orientation of the target object according to the control feedback result and the motion control parameter.
  • the controller may be a PID controller, and the preset control law of the PID controller may be as shown in formula (1).
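  • Finally, a sketch of merging the controller's control feedback result with the angular-velocity feedforward result into the command sent to the actuator; the feedforward gain and the proportional feedback used in the example are illustrative assumptions:

```python
def aiming_command(error, angular_velocity_ff, feedback_controller, ff_gain=1.0):
    """Actuator command = control feedback result + angular-velocity feedforward result."""
    return feedback_controller(error) + ff_gain * angular_velocity_ff

command = aiming_command(error=0.12, angular_velocity_ff=0.30,
                         feedback_controller=lambda e: 1.5 * e)   # simple proportional feedback for illustration
```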
  • the sight of the mobile robot may be configured on a pan-tilt
  • controlling the sight to move in the direction of the target object may include: controlling, through the pan-tilt, the sight to move in the direction of the target object.
  • controlling the sight to move in the direction of the target object may also include: controlling the movement of the mobile robot so as to drive the sight to move in the direction of the target object. That is to say, in the process of aiming, the pan-tilt may rotate to drive the sight toward the direction of the target object while the mobile robot does not move; or the pan-tilt may remain still while the movement of the mobile robot is controlled to drive the sight toward the direction of the target object; or the mobile robot and the pan-tilt may move at the same time to control the sight to move in the direction of the target object.
  • the mobile robot is controlled to move in the direction of the target object, and the sight is controlled to aim at the target object.
  • an embodiment of the present invention provides an aiming control module as shown in FIG. 5, which may include two parts.
  • the first part determines the motion control parameter output by the angular velocity feedforward path, and the second part determines the control feedback result output by the controller.
  • in the first part, if the observation orientation of the target object relative to the mobile robot is expressed in polar coordinates, the polar coordinates are first converted into rectangular coordinates; Kalman filtering is then performed on the abscissa and ordinate directions of the rectangular coordinates to obtain the angular deviation of the target object, the angular velocity of the target object is determined from the angular deviation, and the motion control parameter is determined according to the angular velocity.
  • in the second part, the motion control error is determined from the observation orientation and the current orientation of the mobile robot.
  • the motion control error and the motion control parameter are then used as control signals to control the sight toward the direction of the target object.
  • the angular velocity of the target object is used as the feedforward signal for aiming control
  • the motion control error is used as the feedback signal. Both the feedforward signal and the feedback signal are used for aiming control to improve the accuracy of aiming control.
  • the embodiment of the present invention provides a schematic structural diagram of a mobile robot as shown in FIG. 6.
  • the mobile robot as shown in FIG. 6 may include a memory 601, a processor 602, and a sight 603, where the memory 601, the processor 602, and the sight 603 are connected by a bus 604; the memory 601 stores program code, and the processor 602 calls the program code in the memory 601.
  • the memory 601 may include volatile memory, such as random-access memory (RAM); the memory 601 may also include non-volatile memory, such as flash memory or a solid-state drive (SSD); the memory 601 may also include a combination of the foregoing types of memory.
  • the processor 602 may be a central processing unit (Central Processing Unit, CPU).
  • the processor 602 may further include a hardware chip.
  • the aforementioned hardware chip may be an application-specific integrated circuit (ASIC), a programmable logic device (PLD), etc.
  • the PLD may be a field-programmable gate array (FPGA), a general array logic (generic array logic, GAL), etc.
  • the processor 602 may also be a combination of the foregoing structures.
  • the memory 601 is used to store a computer program, and the computer program includes program instructions.
  • the processor 602 is used to execute the program instructions stored in the memory 601 to implement the steps of the corresponding methods in the embodiments shown in FIG. 3 and FIG. 4 above.
  • the processor 602 is configured to execute when the program instruction is called: obtain the observation orientation of the target object relative to the mobile robot at the current moment; determine the angular velocity of the target object according to the observation orientation; The motion control parameter is determined according to the angular velocity, and the motion control parameter is used to control the sight to move in the direction of the target object.
  • the sight includes a camera
  • the processor 602 performs the following operations when acquiring the observation orientation of the target object relative to the mobile robot at the current moment: determine the target object according to the target image collected by the camera; and determine the observation orientation of the target object relative to the mobile robot at the current moment according to the target image and the target object.
  • optionally, the observation orientation is expressed in polar coordinates, and
  • when determining the observation orientation of the target object relative to the mobile robot at the current moment according to the target image and the target object, the processor 602 performs the following operations: determine the polar radius of the observation orientation according to the height of the target object in the target image and the actual height of the target object; and determine the polar angle of the observation orientation according to the angular equivalent of a pixel, the abscissa of the center point of the target object, and the lateral resolution of the target image.
  • the sight includes a camera and a time-of-flight TOF sensor
  • the processor 602 performs the following operations when acquiring the observation orientation of the target object relative to the mobile robot at the current moment: determine the target object according to the target image collected by the camera; determine the first observation orientation of the target object relative to the mobile robot at the current moment according to the target image and the target object; determine the second observation orientation of the target object relative to the mobile robot at the current moment according to the depth image obtained by the TOF sensor and the target object; and obtain the observation orientation of the target object relative to the mobile robot at the current moment according to the first observation orientation and the second observation orientation.
  • the processor 602 performs the following operations when determining the angular velocity of the target object according to the observation orientation: according to the position and velocity of the target object relative to the mobile robot at the previous moment, Determine the predicted azimuth of the target object relative to the mobile robot at the current moment; perform fusion filtering processing on the predicted azimuth and the observation azimuth; and determine the angular velocity of the target object according to the result of the fusion filtering processing.
  • the fusion filter processing includes Kalman filter processing.
  • when determining the angular velocity of the target object according to the result of the fusion filtering, the processor 602 performs the following operations: determine the angular deviation of the target object relative to the mobile robot according to the result of the fusion filtering; and perform differencing on the angular deviation to obtain the angular velocity of the target object.
  • when determining the motion control parameter according to the angular velocity, the processor 602 performs the following operations: perform tracking-differentiation processing on the angular velocity to obtain a differential output and a follow output; and determine the motion control parameter according to the differential output and the follow output.
  • when determining the motion control parameter according to the differential output and the follow output, the processor 602 performs the following operations: determine the differential gain and the follow gain of the tracking-differentiation processing; and add the product of the differential output and the differential gain to the product of the follow output and the follow gain to obtain the motion control parameter.
  • when the program instructions are called, the processor 602 also executes: determining the motion control error according to the observation orientation and the current orientation of the mobile robot; and controlling the sight to move in the direction of the target object according to the motion control error and the motion control parameter.
  • the sight includes a pan-tilt
  • the processor 602 performs the following operations when controlling the sight to move in the direction of the target object: control the sight, through the pan-tilt, to move in the direction of the target object.
  • alternatively, when controlling the sight to move in the direction of the target object, the processor 602 performs the following operations: control the movement of the mobile robot so as to drive the sight to move in the direction of the target object.
  • the program can be stored in a computer-readable storage medium and, when executed, may include the procedures of the above-mentioned method embodiments.
  • the storage medium may be a magnetic disk, an optical disc, a read-only memory (Read-Only Memory, ROM), or a random access memory (Random Access Memory, RAM), etc.

Abstract

An aiming control method, comprising: acquiring the observation orientation of a target object relative to a mobile robot at the current moment; determining the angular velocity of the target object according to the observation orientation; and determining a motion control parameter according to the angular velocity, the motion control parameter being used to control a sight to move in the direction of the target object. The method can improve the accuracy of aiming control. A mobile robot and a computer-readable storage medium are also disclosed.

Description

一种瞄准控制方法、移动机器人及计算机可读存储介质 技术领域
本发明涉及电子技术领域,尤其涉及一种瞄准控制方法、移动机器人及计算机可读存储介质。
背景技术
在移动机器人领域中,自动瞄准技术是指:在目标对象和/或移动机器人运动过程中,控制目标对象与移动机器人的相对方位处于期望值的一种伺服控制技术。目前,常用的瞄准控制方法是基于误差设计的,可通过图像传感器获取当前时刻目标对象相对于移动机器人的方位,进一步的,基于该方位确定期望方位与移动机器人的实际方位之间的差值,并基于所述差值和预设控制律生成对移动机器人的控制策略。经实践证明,上述方法存在稳态误差大、抗噪声能力差、控制滞后等缺陷,导致对移动机器人的瞄准控制不准确。
发明内容
本发明实施例提供了一种瞄准控制方法、移动机器人及计算机可读存储介质,能够提高瞄准控制的准确性。
第一方面,本发明实施例提供了一种瞄准控制方法,所述方法应用于移动机器人,所述移动机器人包括瞄准器,所述方法包括:
获取当前时刻目标对象相对于所述移动机器人的观测方位;
根据所述观测方位,确定所述目标对象的角速度;
根据所述角速度确定运动控制参数,所述运动控制参数用于控制所述瞄准器向所述目标对象的方向进行运动。
第二方面,本发明实施例提供了一种移动机器人,包括:瞄准器、存储器和处理器:
所述存储器,用于存储程序代码;
所述处理器,用于调用所述程序代码,当所述程序代码被执行时,用于执行以下操作:
获取当前时刻目标对象相对于所述移动机器人的观测方位;
根据所述观测方位,确定所述目标对象的角速度;
根据所述角速度确定运动控制参数,所述运动控制参数用于控制所述瞄准器向所述目标对象的方向进行运动。
第三方面,本发明实施例提供了一种计算机可读存储介质,所述计算机可读存储介质存储有计算机程序指令,所述计算机程序指令被执行时用于实现上述的第一方面所述的瞄准控制方法。
本发明实施例中,在获取到当前时刻目标对象相对于移动机器人的观测方位时,根据观测方位确定目标对象的角速度;再根据所述目标对象的角速度确定用于控制瞄准器向所述目标对象的方向进行运动的运动控制参数。在上述瞄准控制过程中,将目标对象的角速度作为瞄准控制的前馈信号,与现有技术中仅基于误差进行瞄准控制相比,可以准确地控制瞄准器沿着目标对象的方向进行运动,提高了瞄准控制的准确性。
附图说明
为了更清楚地说明本发明实施例或现有技术中的技术方案,下面将对实施例中所需要使用的附图作简单地介绍,显而易见地,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1为本发明实施例提供的一种自动瞄准的示意图;
图2a为现有技术提供的一种瞄准控制系统的示意图;
图2b为本发明实施例提供的一种瞄准控制系统的示意图;
图3为本发明实施例提供的一种瞄准控制方法的流程图;
图4为本发明实施例提供的另一种瞄准控制方法的流程示意图;
图5为本发明实施例提供的一种瞄准控制模块的示意图;
图6为本发明实施例提供的一种移动机器人的结构示意图。
具体实施方式
下面将结合本发明实施例中的附图,对本发明实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本发明一部分实施例,而不是全部的实施例。基于本发明中的实施例,本领域普通技术人员在没有作出创造 性劳动前提下所获得的所有其他实施例,都属于本发明保护的范围。
如图1所示,图1为本发明实施例提供的一种自动瞄准的示意图,在图1中,101表示目标对象,102表示移动机器人,所述目标对象可以包括移动机器人设备或者其他地面固定设备;103表示移动机器人上配置的瞄准器,箭头表示移动机器人的移动方向。自动瞄准就是指控制移动机器人与目标对象之间的相对方位处于期望值,以使得移动机器人可以瞄准并命中目标对象。
在瞄准控制系统中,通常采用的瞄准控制方法是将运动控制误差作为瞄准控制的反馈信号,基于此反馈信号来生成控制策略以控制瞄准器运动。如图2a,图2a为现有技术提供的一种瞄准控制系统的示意图。在图2a所示的瞄准控制系统中可包括目标对象、移动机器人(例如轮式移动机器人)、检测器、跟踪器、控制器和执行器。所述检测器、所述跟踪器、所述控制器和所述执行器组成了瞄准控制系统中的自动瞄准模块。
在图2a所述的瞄准控制系统中,所述移动机器人的图像传感器可以采集包括目标对象的当前时刻图像,并将所述当前时刻图像传递到自动瞄准模块;响应于接收到的当前时刻图像,自动瞄准模块执行如下操作:检测器和跟踪器对所述当前时刻图像进行处理,确定出当前时刻目标对象相对于移动机器人的观测方位,然后将观测方位输入到控制器;控制器可以根据所述观测方位确定轮式移动机器人的理想方位,根据所述理想方位和轮式移动机器人的当前方位,确定运动控制误差,进一步的,根据所述运动控制误差和控制器的控制律生成控制反馈结果,并将所述控制反馈结果传递给执行器,所述执行器基于控制反馈结果对轮式移动机器人进行瞄准控制。
在执行器按照控制反馈结果进行瞄准时,移动机器人和目标对象之间的相对方位可能发生改变,图像传感器进一步采集变化后的图像,传递到自动瞄准模块以重复执行上述过程,由此形成闭环反馈。
可选的,图2a所述控制器可以为比例-积分-微分(Proportional-Integral-Differential,PID)控制器,所述将运动控制误差代入控制器的控制律进行运算以得到控制结果,可以通过公式(1)表示,在公式(1)中假设图2a中所用的控制器为PID控制器:
Figure PCTCN2019085245-appb-000001
在公式(1)中,e表示误差,K p表示比例系数,K i表示积分系数,K d表示微分系数,K p,K i以及K d用以调整当前误差、累计误差和误差变化对控制器的输出的影响程度。
由此,PID控制器构成了一个超前-滞后校正器,通过调整比例系数、积分系数和微分系数,可以使得瞄准控制系统的运动控制误差保持在一定范围内,然而基于PID控制器的控制方法有几个缺陷。其一是可能存在无法消除的稳态误差;其二是抗噪声能力差,PID控制器的微分环节具有+20db/dec的全局幅频特性,意味着系统的高频噪声将被微分环节放大,导致系统输出震颤;其三,PID控制器是基于误差的反馈控制器,即一定需要误差出现后才能得到控制器的输出,使其输出永远滞后于误差变化。经实践证明,基于图2a所述的瞄准控制系统进行瞄准控制时,移动机器人对目标对象的命中率通常情况下小于10%。
为了解决上述问题,本发明实施例提出了一种基于角速度前馈的瞄准控制系统,如图2b所示。与图2a中所述瞄准控制系统相同的:在图2b所述的瞄准控制系统中,也可包括检测器、跟踪器、控制器和检测器;与图2a中不同的是:图2b所述的瞄准控制系统中,引入了一条角速度前馈通路,所述角速度前馈通路和控制器分别对检测器和跟踪器确定出的观测方位进行处理,得到角速度前馈结果和控制器的控制反馈结果,最后将角速度前馈结果和控制反馈结果进行融合并输出给执行器,实现更加准确的瞄准控制。
经分析,图2b所述的瞄准控制系统通过引入角速度前馈,并与控制器结合可提高瞄准控制的准确性。经实践证明,通过引入角速度前馈,移动机器人对目标对象的命中率可达50%以上。
在一种实施方式中,角速度前馈输出也可以单独作为执行器的输入。
本发明实施例提供了一种瞄准控制方法如图3所示。所述瞄准控制方法可以应用于图2b所示的瞄准控制系统中,所述瞄准控制方法可应用于移动机器人,所述移动机器人包括瞄准器。图3所述的瞄准控制方法可以由移动机器人执行,具体地,可以由移动机器人的处理器执行,图3所述的瞄准控制方法可包括如下步骤:
步骤S301、获取当前时刻目标对象相对于移动机器人的观测方位。
其中,所述目标对象是指待瞄准对象。所述观测方位包括当前时刻目标对象相对于移动机器人的方向和距离。在一个实施例中,所述目标对象相对于移动机器人的观测方位可以指:目标对象上任意一点到移动机器人上任意一点之间的相对方向和距离。示例性地,假设所述移动机器人的瞄准器上包括摄像头,所述目标对象相对于所述移动机器人的观测方位可以指:摄像头的coms传感器中心点与所述目标对象所在成像平面中心点的相对方向和距离。
再一个实施例中,所述目标对象相对于所述移动机器人的观测方位还可以指:移动机器人的质心与所述目标对象的质心之间的相对方向和距离。应当理解的,上述只是本发明实施例列举的两种观测方位的定义方式,在其他实施例中,本领域技术人员可根据实际需求设定所述观测方位的定义方式。
应当理解的,若想要获取到目标对象相对于移动机器人的观测方位,首先需要确定出目标对象。在一个实施例中,移动机器人的瞄准器上可包括摄像头,可以采用摄像头采集当前时刻包含目标对象的目标图像,再采用图像分割或者深度学习等技术对目标图像进行处理以确定出目标对象。示例性地,采用图像分割确定目标图像中包括的目标对象是指:按照预设的分割规则将目标图像分割成至少一个物体区域;对分割后的每个物体区域进行特征提取以获得每个物体区域的特征信息;再将每个物体区域的特征信息与预先存储的或者预先确定的目标对象的特征信息进行对比以判断该物体区域是否为目标对象,如果是,则将该物体区域确定为包含目标对象的目标物体区域。基于上述过程便可确定出目标图像中包括的目标对象。
在一个实施例中,基于图2b所示的瞄准控制系统,本发明实施例可调用检测器和跟踪器执行上述采用图像分割技术确定目标图像中包括的目标对象的步骤,具体地:检测器可采用图像分割技术将目标对象从目标图像中分割出来,所述目标对象可以以矩形框形式表示;然后将所述矩形框传递给跟踪器;跟踪器根据接收到的矩形框内的图像颜色、梯度等信息,再附加所述矩形框的历史位置、尺寸等信息,融合出噪声较小的矩形框,所述融合出来的矩形框则可用于表示目标对象。
在一个实施例中,在确定出目标对象之后,便可以根据目标对象在目标图 像中的一些信息以及目标图像的信息确定所述目标对象相对于所述移动机器人的观测方位。关于具体如何根据上述信息确定目标对象相对于所述移动机器人的观测方位将在后面进行瞄准。
步骤S302、根据所述观测方位,确定目标对象的角速度。
在一个实施例中,检测器中会包含一定噪声,因此,根据检测器和跟踪器确定出来的目标对象可能不够准确,这样,基于所述目标对象确定出目标对象相对于所述移动机器人的观测方位可能也存在由噪声引起的误差。如果直接根据所述观测方位确定角速度会导致最后根据角度速生成的运动控制参数存在误差,从而影响瞄准控制的准确性。因此,在步骤S302中根据观测方位确定目标对象的角度速时,可以首先对观测方位进行滤波处理,以消除观测方位中包括的误差。
在一个实施例中,由于目标对象的运动具有位置和速度的连续性,也即目标对象下一时刻的位置和速度不会与上一时刻的位置和速度偏差较大;为了进一步提高步骤S302中确定的角速度的准确性,在根据观测方位确定角速度时,还可以根据上一时刻目标对象相对于移动机器人的位置和速度推算出当前时刻目标对象相对于移动机器人的预测方位;再将预测方位和观测方位进行融合滤波处理得到目标对象的角速度。对于具体的将预测方位和观测方位进行融合滤波处理的实施方式将在后面具体介绍。
步骤S303、根据所述角速度确定运动控制参数。
所述运动控制参数用于控制所述移动机器人的瞄准器向所述目标对象的方向进行运动,所述运动控制参数可以包括角速度。在一个实施例中,前述调用检测器确定目标对象的过程以及步骤S302中根据观测方位确定角速度的过程耗时较长,导致角速度前馈通路的整体时延较大,若直接将步骤S302中确定的角速度作为运动控制参数输出给执行器以控制移动机器人运动会引起振荡。为了避免此问题,本发明实施例在通过步骤S302得到目标对象的角度速之后,采用相关技术手段对步骤S302的角速度进行处理以达到加快响应的目的。所述相关技术手段可包括跟踪-微分处理,所述跟踪-微分处理是采用一个跟踪-微分器作为超前矫正叠加在角速度上,叠加后的角速度作为运动控制参数。
本发明实施例中在获取到当前时刻目标对象相对于移动机器人的观测方 位时,根据观测方位确定目标对象的角速度;再根据所述目标对象的角速度确定出用于控制瞄准器向所述目标对象的方向进行运动的运动控制参数。在上述瞄准控制过程中,将目标对象的角速度作为瞄准控制的前馈信号,基于所述前馈信号确定出的运动控制参数可以准确地控制瞄准器沿着目标对象的方向进行运动,提高了瞄准控制的准确性。
请参考图4,为本发明实施例提供的另一种瞄准控制方法,所述瞄准控制方法可应用于图2所示的瞄准控制系统中,所述瞄准控制方法应用于移动机器人,所述移动机器人包括瞄准器,图4所述的瞄准控制方法可包括如下步骤:
步骤S401、获取当前时刻目标对象相对于所述移动机器人的观测方位。
在一个实施例中,所述步骤S401的实施方式可以为:根据所述摄像头采集到的目标图像,确定所述目标对象;根据所述目标图像和所述目标对象,确定当前时刻所述目标对象相对于所述移动机器人的观测方位。其中,所述当前时刻所述目标对象相对于所述移动机器人的观测方位可以是以直角坐标的形式表示的,也可以是以极坐标的形式表示的。示例性地,如果所述观测方位是以极坐标形式表示的,则所述根据所述目标图像和所述目标对象,确定当前目标对象相对于所述移动机器人的观测方位可包括如下步骤:(1)根据所述目标对象在所述目标图像中对应的高度和所述目标对象的实际高度,确定所述观测方位的极径;(2)根据所述像素的角度当量、所述目标对象的中心点的横坐标以及所述目标图像的横向分辨率,确定所述观测方位的极角。
在(1)中所述目标对象的实际高度可以是指目标对象的物理高度。由前述可知,本发明实施例中可以采用图像分割技术将摄像头采集到的目标图像分割出至少一个物体区域,然后提取各个物体区域的特征信息,进而根据各个物体区域的特征信息确定出目标对象。基于此,上述(1)中所述目标对象在所述目标图像中对应的高度可以指目标对象在目标图像中的高度;或者,假设目标对象是以矩形框形式表示的,所述目标对象在所述目标图像中对应的高度也可以指矩形框在所述目标图像中的高度。可选的,所述目标对象在所述目标图像中对应的高度可以指像素高度,示例性地,目标对象在目标图像中对应的高度可以为5像素。
作为一种可行的实施例方式,上述(1)中所述根据目标对象在所述目标图像中对应的高度和所述目标对象的实际高度确定所述观测方位的极径的具体方式可包括:将所述目标对象在所述目标图像中对应的高度和所述目标对象的实际高度代入极径确定公式进行运算,运算所得结果即为观测方位的极径。例如,极径确定公式可如下公式(2)所示:
Figure PCTCN2019085245-appb-000002
其中,r表示观测方位的极径,H表示目标对象的实际高度,h表示目标对象在所述目标图像中对应的高度;k表示一个常数,其含义为:当极径为1时目标对象在所述目标图像中对应的高度。上述公式主要依据三角形近似相似原理,在已知k,h和H的情况下,可计算出当前时刻目标对象相对于移动机器人的极径。
上述(2)步骤中所述像素的角度当量用于表示像素与角度之间的换算关系,也即一个像素可以表示为多大角度;所述目标对象的中心点可以指目标对象的质心,或者指用于表示目标对象的矩形框的中心点;所述目标对象的中心点的横坐标可以是像素值也可以是物理坐标系中的数值,为了方便计算,本发明实施例中此处所述横坐标用像素值来表示;所述目标图像的横向分辨率是指所述目标图像在横坐标方向上包括多少个像素。具体地,所述步骤(2)确定所述观测方位的极角的实施方式可以为:将所述像素的角度当量、所述目标对象的中心点的横坐标以及所述目标图像的横向分辨率代入到极角确定公式中进行运算,将运算所得的结果确定为观测方位的极角。
例如,极角的计算公式可如下公式(3)所示:
Figure PCTCN2019085245-appb-000003
其中,θ表示观测方位的极角,N ang表示像素的角度当量,u表示目标对象的中间点的横坐标,H res表示目标图像的横向分辨率。
在其他实施例中,如果瞄准器还包括飞行时间(Time of Flight,TOF)传感器,所述步骤S401的实现方式还可以是:根据所述摄像头采集到的目标图像,确定所述目标对象;根据所述目标图像和所述目标对象,确定当前目标对 象相对于所述移动机器人的第一观测方位;根据所述TOF传感器获得的深度图像和所述目标对象,确定当前时刻目标对象相对于所述移动机器人的第二观测方位;根据所述第一观测方位和所述第二观测方位,得到当前目标对象相对于所述移动机器人的观测方位。
所述TOF传感器的工作原理是;TOF传感器发出经调制的近红外光,遇物体后反射,TOF传感器通过计算发射近红外光和接收到反射的时间差或者相位差,来换算被拍摄对象的距离以产生深度图像。基于所述TOF传感器的工作原理,本发明实施例中所述根据TOF传感器获得的深度图像和所述目标对象,确定当前时刻目标对象相对于所述移动机器人的第二观测方位的实施方式可以为:将基于所述目标图像确定出来的目标对象或者用于表示目标对象的矩形框映射到深度图像中,由此在深度图像中便可确定出目标对象相对于移动机器人的第二观测方位。应当理解的,通过TOF传感器确定的目标对象相对于移动机器人的第二观测方位可以用直角坐标形式表示,也可以用极坐标形式表示,为了方便将第一观测方位和第二观测方位融合,第一观测方位和第二观测方位采用相同的表示形式。
通过上述方法确定出目标对象相对于移动机器人的第一观测方位和第二观测方位之后,可以根据第一观测方位和所述第二观测得到当前时刻目标对象相对于移动机器人的观测方位。在一个实施例中,所述根据第一观测方位和所述第二观测方位得到当前时刻目标对象相对于所述移动机器人的观测方位的实施方式可包括:对所述第一观测方位和所述第二观测方位进行加权平均运算,得到的运算结果作为当前时刻目标对象相对于所述移动机器人的观测方位。在其他实施例中,所述根据第一观测方位和所述第二观测方位得到当前时刻目标对象相对于所述移动机器人的观测方位的实施方式还可以包括:基于预设融合模型对所述第一观测方位和所述第二观测方位进行融合处理得到融合值;将所述融合值确定为当前时刻目标对象相对于所述移动机器人的观测方位。
Step S402: determine the angular velocity of the target object according to the observation orientation.
As described above, step S402 may be implemented as follows: determine a predicted orientation of the target object relative to the mobile robot at the current moment according to the position and velocity of the target object relative to the mobile robot at the previous moment; perform fusion filtering on the predicted orientation and the observation orientation; and determine the angular velocity of the target object according to the result of the fusion filtering.
It should be understood that, since the motion of the target object is continuous in position and velocity, the predicted orientation of the target object relative to the mobile robot at the current moment may be determined from the position and velocity at the previous moment as follows: the position of the target object relative to the mobile robot at the previous moment is extrapolated along the velocity direction over one sampling interval to obtain the predicted orientation of the target object relative to the mobile robot at the current moment. An embodiment of the present invention may choose Kalman filtering to perform the fusion filtering of the predicted orientation and the observation orientation; the following description takes a Kalman filter as an example to explain how the predicted orientation and the observation orientation are fused.
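The extrapolation described here amounts to a constant-velocity prediction over one sampling interval; a minimal sketch, with dt an assumed sampling interval:

```python
def predict_orientation(prev_position, prev_velocity, dt):
    """Constant-velocity prediction of the target's relative position after one sample."""
    return tuple(p + v * dt for p, v in zip(prev_position, prev_velocity))

predicted = predict_orientation((2.0, 0.1), (0.3, -0.05), dt=0.02)
```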
When a Kalman filter is used to fuse the predicted orientation and the observation orientation, in order to decouple the planar motion, the predicted orientation and the observation orientation need to be expressed in rectangular coordinates. If the predicted orientation and the observation orientation determined above are expressed in polar coordinates, the polar form first needs to be converted into rectangular form. For example, assuming the observation orientation of the target object is expressed in polar coordinates as (r, θ), the polar coordinates of the observation orientation can be converted into rectangular form by formula (4):
Px = r·cosθ
Py = r·sinθ          (4)
where, in formula (4), r denotes the polar radius of the observation orientation, θ denotes the polar angle of the observation orientation, Px denotes the abscissa of the observation orientation in rectangular coordinates, and Py denotes the ordinate of the observation orientation in rectangular coordinates.
After the transformation into rectangular coordinates, Kalman filtering is performed separately in the abscissa direction (denoted x) and the ordinate direction (denoted y). It should be understood that, once the observation orientation has been transformed into rectangular coordinates, the value of the abscissa represents the relative position of the target object with respect to the mobile robot in the abscissa direction x (denoted x1), and differencing the relative position in the abscissa direction x yields the relative velocity of the target object with respect to the mobile robot in the abscissa direction (denoted x2); likewise, the value of the ordinate of the observation orientation represents the relative position of the target object with respect to the mobile robot in the ordinate direction y (denoted y1), and differencing the relative position in the ordinate direction yields the relative velocity of the target object with respect to the mobile robot in the ordinate direction y (denoted y2). It follows that the state variables in the x direction are x = (x1, x2)^T and the state variables in the y direction are y = (y1, y2)^T, and performing Kalman filtering separately in the abscissa direction and the ordinate direction essentially means filtering the state variables of the abscissa direction and filtering the state variables of the ordinate direction.
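A short sketch of this preprocessing step, combining the conversion of formula (4) with the differencing that yields the velocity components; dt is an assumed sampling interval:

```python
import math

def to_cartesian(r, theta):
    """Formula (4): convert a polar observation (r, theta) to rectangular coordinates."""
    return r * math.cos(theta), r * math.sin(theta)

def observed_states(prev_xy, curr_xy, dt):
    """Build per-axis observations (position, velocity) from two consecutive
    rectangular observations by differencing over the sampling interval dt."""
    (px0, py0), (px1, py1) = prev_xy, curr_xy
    x_state = (px1, (px1 - px0) / dt)   # (x1, x2) for the abscissa direction
    y_state = (py1, (py1 - py0) / dt)   # (y1, y2) for the ordinate direction
    return x_state, y_state
```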
The state-space model defined by the Kalman filter can be expressed as formulas (5) and (6):
x1(k+1)=x1(k)+dTx2(k)+W(k)         (5)
x2(k+1)=x2(k)+V(k)               (6)
where k denotes time k, k+1 denotes time k+1, x1(k+1) denotes the state of the state variable x1 at time k+1, x2(k+1) denotes the state of the state variable x2 at time k+1, x1(k) and x2(k) denote the states of the state variables x1 and x2 at time k respectively, and W and V denote the prediction noise and the observation noise respectively; the prediction noise can be regarded as the deviation between the predicted orientation and the true orientation, and the observation noise can be regarded as the deviation between the observation orientation and the true orientation.
Taking Kalman filtering in the abscissa direction as an example, the steps of fusion filtering of the predicted orientation and the observation orientation are described in detail below. In the following description, x denotes the state variable of the predicted orientation in the abscissa direction, and z denotes the observed value of the state variable x. For the state variable x, the iteration of the Kalman filter from time k to time k+1 can be expressed as formulas (7)-(11):
x_{k+1|k} = A·x_{k|k}          (7)
P_{k+1|k} = A·P_{k|k}·A^T + Q          (8)
K_{k+1|k} = P_{k+1|k}·(P_{k+1|k} + R)^{-1}          (9)
x_{k+1|k+1} = x_{k+1|k} + K_{k+1|k}·(z_{k+1} - x_{k+1|k})          (10)
P_{k+1|k+1} = (1 - K_{k+1|k})·P_{k+1|k}          (11)
In the above formulas, A denotes the state-transition matrix, and formula (7) predicts x at time k+1 from x at time k; P_{k+1|k} is the covariance corresponding to x_{k+1|k}, P_{k|k} denotes the covariance corresponding to x_{k|k}, A^T denotes the transpose of A, Q is the covariance matrix of the prediction noise, and formula (8) gives the covariance of the prediction of x at time k+1 made from time k; K_{k+1|k} denotes the Kalman gain, R denotes the observation-noise covariance, and formula (9) determines the Kalman gain at time k+1 from the predicted covariance at time k+1 and the observation noise; z_{k+1} denotes the observed value of the state variable x at time k+1, and x_{k+1|k+1} denotes the optimal estimate of the state variable x at time k+1 obtained by fusing the predicted value of x at time k+1 with the observed value; P_{k+1|k+1} is the covariance corresponding to x_{k+1|k+1}, prepared for the next recursion.
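The per-axis iteration of formulas (7)-(11) can be written down directly; the sketch below uses numpy, treats both the relative position and the differenced relative velocity as observed (as formulas (9) and (10) imply), and the noise covariances Q and R are tuning assumptions rather than values disclosed in this application:

```python
import numpy as np

def kalman_axis_step(x_est, P_est, z, dt, Q, R):
    """One iteration of formulas (7)-(11) for a single axis.

    x_est -- previous optimal estimate x_{k|k}, shape (2,): [position, velocity]
    P_est -- previous covariance P_{k|k}, shape (2, 2)
    z     -- current observation z_{k+1}, shape (2,): [position, velocity]
    dt    -- sampling interval dT
    Q, R  -- prediction-noise and observation-noise covariance matrices, shape (2, 2)
    """
    A = np.array([[1.0, dt],
                  [0.0, 1.0]])                       # state transition of formulas (5)-(6)
    x_pred = A @ x_est                               # (7)  x_{k+1|k}
    P_pred = A @ P_est @ A.T + Q                     # (8)  P_{k+1|k}
    K = P_pred @ np.linalg.inv(P_pred + R)           # (9)  Kalman gain
    x_new = x_pred + K @ (z - x_pred)                # (10) fuse prediction and observation
    P_new = (np.eye(2) - K) @ P_pred                 # (11) covariance for the next recursion
    return x_new, P_new
```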
It can be seen that the above process yields the optimal value x_{k+1|k+1} of the state variable x in the abscissa direction; the same process is applied to obtain the optimal value y_{k+1|k+1} of the state variable y in the ordinate direction, and x_{k+1|k+1} and y_{k+1|k+1} are the fusion-filtering result of the filtering process. The angular velocity of the target object can then be determined from the fusion-filtering result.
In one embodiment, determining the angular velocity of the target object according to the result of the fusion filtering may include: determining, according to the result of the fusion filtering, the angular deviation of the target object relative to the mobile robot; and differencing the angular deviation to obtain the angular velocity of the target object. In one embodiment, the obtained components in the abscissa direction and the ordinate direction may be transformed into polar coordinates, the angular deviation is obtained from the polar coordinates, and the angular deviation is then differenced to obtain the angular velocity of the target object.
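One possible reading of this step, as a sketch: the filtered position components are converted back to an angle and the angle is differenced over the sampling interval (angle wrap-around is ignored here for brevity):

```python
import math

def angular_velocity(prev_xy, curr_xy, dt):
    """Angular deviation from the filtered (x1, y1) components, differenced over dt."""
    prev_angle = math.atan2(prev_xy[1], prev_xy[0])
    curr_angle = math.atan2(curr_xy[1], curr_xy[0])
    return (curr_angle - prev_angle) / dt
```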
Step S403: determine the motion control parameter according to the angular velocity.
As described above, in order to speed up the response, an embodiment of the present invention may use a tracking-differentiator as a lead correction superimposed on the angular velocity determined in step S402, and the angular velocity after the tracking-differentiation processing may be used as the motion control parameter.
Specifically, step S403 may include: performing tracking-differentiation processing on the angular velocity to obtain a differential output and a following output; and determining the motion control parameter according to the differential output and the following output. Determining the motion control parameter according to the differential output and the following output may include: determining the differential gain and the following gain of the tracking-differentiation processing; and adding the result of multiplying the differential output by the differential gain to the result of multiplying the following output by the following gain to obtain the motion control parameter.
Optionally, a linear tracking-differentiator is used to perform the tracking-differentiation processing on the angular velocity; the tracking-differentiator has the transfer function shown in formula (12):
G(s) = r^2 / (s + r)^2          (12)
This element is a second-order inertia element, r is a known parameter, and s is the variable of the transfer function. By selecting suitable state variables, assumed to be x = (x1, x2)^T, and converting the transfer function into state space, a tracking-differentiator with first-order filtering and first-order differentiation is obtained, as shown in formulas (13) and (14):
x1(k+1)=x1(k)+dTx2(k)           (13)
x2(k+1)=x2(k)+dTfst(u,x,k)          (14)
where u is the signal to be tracked/differentiated, namely the angular velocity of the target object output by the Kalman filter, x1 denotes the following signal of the original signal, x2 denotes the differential signal of the original signal, and fst(u, x, k) is the following function, determined from formula (12); fst(u, x, k) can be expressed as formula (15):
fst(u, x, k) = -r^2·(x1(k) - u(k)) - 2r·x2(k)          (15)
The differential signal x2 obtained above is multiplied by the differential gain k2, the following signal x1 is multiplied by the following gain k1, and the two products are added to obtain the angular velocity of the target object after the tracking-differentiation processing; this angular velocity is taken as the motion control parameter determined in step S403.
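A discrete sketch of the linear tracking-differentiator of formulas (13)-(15) and of the gain combination just described; r, dT, k1 and k2 are illustrative tuning values, and formula (15) is used in the form given above:

```python
class LinearTrackingDifferentiator:
    """Linear tracking-differentiator: x1 follows the input u, x2 is its derivative."""

    def __init__(self, r, dt):
        self.r = r          # speed factor of the second-order element
        self.dt = dt        # sampling interval dT
        self.x1 = 0.0       # following signal
        self.x2 = 0.0       # differential signal

    def step(self, u):
        fst = -self.r ** 2 * (self.x1 - u) - 2.0 * self.r * self.x2   # formula (15)
        self.x1 += self.dt * self.x2                                  # formula (13)
        self.x2 += self.dt * fst                                      # formula (14)
        return self.x1, self.x2

def feedforward_parameter(td, omega, k1, k2):
    """Lead-corrected angular velocity: k1 * following output + k2 * differential output."""
    x1, x2 = td.step(omega)
    return k1 * x1 + k2 * x2
```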
Step S404: determine a motion control error according to the observation orientation and the current orientation of the mobile robot.
As described above, in an embodiment of the present invention the angular-velocity feedforward is combined with a controller to perform aiming control; when performing aiming control, the controller determines its control feedback result according to the motion control error and the control law corresponding to the controller. The motion control error refers to the difference between the current orientation of the mobile robot and the desired orientation. The motion control error is obtained as follows: the desired orientation of the mobile robot is determined according to the observation orientation of the target object relative to the mobile robot at the current moment; the current orientation of the mobile robot is obtained, and the difference between the current orientation and the desired orientation is determined as the motion control error.
Step S405: control the aiming device to move toward the direction of the target object according to the motion control error and the motion control parameter.
Controlling the aiming device to move toward the orientation of the target object according to the motion control error and the motion control parameter may be implemented as follows: the motion control error is processed according to the preset control law of the controller to obtain a control feedback result; the aiming device is then controlled to move toward the orientation of the target object according to the control feedback result and the motion control parameter. The controller may be a PID controller, and the preset control law of the PID controller may be as shown in formula (1).
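As a rough sketch of how the feedback branch and the feedforward branch could be combined; the PID gains and the simple summation of the two signals are assumptions made for illustration, since the application states only that both signals are used to control the aiming device:

```python
class PidController:
    """Minimal discrete PID acting on the orientation error."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def aiming_command(pid, desired_orientation, current_orientation, feedforward):
    """Control feedback from the motion control error plus the angular-velocity feedforward."""
    error = desired_orientation - current_orientation   # motion control error of step S404
    return pid.step(error) + feedforward                # combined command sent to the actuator
```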
In one embodiment, the aiming device of the mobile robot may be mounted on a gimbal, and controlling the aiming device to move toward the direction of the target object may include: controlling, through the gimbal, the aiming device to move toward the direction of the target object.
In other embodiments, controlling the aiming device to move toward the direction of the target object may include: driving the aiming device to move toward the direction of the target object by controlling the movement of the mobile robot. That is to say, during aiming, the gimbal may rotate to drive the aiming device toward the direction of the target object while the mobile robot does not move; or the gimbal may stay still while the mobile robot is controlled to move and thereby drive the aiming device toward the direction of the target object; or the mobile robot and the gimbal may move at the same time to control the aiming device to move toward the direction of the target object. For example, when the distance between the target object and the mobile robot exceeds a distance threshold, the mobile robot is controlled to move toward the direction of the target object while the aiming device is controlled to aim at the target object.
Based on the description of steps S401-S405, an embodiment of the present invention provides an aiming control module as shown in FIG. 5. FIG. 5 may include two parts: the first part determines the motion control parameter output by the angular-velocity feedforward path, and the second part determines the control feedback result output by the controller. Assuming that the observation orientation of the target object relative to the mobile robot is expressed in polar coordinates, then for the first part: the polar coordinates are converted into rectangular coordinates; Kalman filtering is then performed separately on the abscissa direction and the ordinate direction of the rectangular coordinates to obtain the angular velocity of the target object; the tracking-differentiator then performs tracking-differentiation processing on the angular velocity to obtain a following output and a differential output; the following output and the differential output are multiplied by the following gain and the differential gain respectively, and the products are added to obtain the motion control parameter output by the angular-velocity feedforward path. For the second part: the motion control error is determined from the current orientation and the desired orientation of the mobile robot; the motion control error is then fed into the preset control law corresponding to the controller, which yields the control feedback result of the controller.
In this embodiment of the present invention, when the observation orientation of the target object relative to the mobile robot at the current moment is obtained, the angular velocity of the target object is determined from the observation orientation, the motion control parameter is then determined from the angular velocity of the target object, the motion control error is further determined from the observation orientation and the current orientation of the mobile robot, and finally the motion control error and the motion control parameter are used together as control signals to control the aiming device to move toward the direction of the target object. In the above aiming control process, the angular velocity of the target object serves as the feedforward signal of the aiming control and the motion control error serves as the feedback signal; using the feedforward signal and the feedback signal together for aiming control improves the accuracy of aiming control.
Based on the method embodiments described above with reference to FIG. 3 and FIG. 4, an embodiment of the present invention provides a schematic structural diagram of a mobile robot as shown in FIG. 6. The mobile robot shown in FIG. 6 may include a memory 601, a processor 602 and an aiming device 603, where the memory 601, the processor 602 and the aiming device 603 are connected by a bus 604; the memory 601 stores program code, and the processor 602 calls the program code in the memory 601.
The memory 601 may include a volatile memory, such as a random-access memory (RAM); the memory 601 may also include a non-volatile memory, such as a flash memory or a solid-state drive (SSD); the memory 601 may also include a combination of the above kinds of memory.
The processor 602 may be a central processing unit (CPU). The processor 602 may further include a hardware chip. The hardware chip may be an application-specific integrated circuit (ASIC), a programmable logic device (PLD), or the like. The PLD may be a field-programmable gate array (FPGA), generic array logic (GAL), or the like. The processor 602 may also be a combination of the above structures.
In this embodiment of the present invention, the memory 601 is used to store a computer program, the computer program includes program instructions, and the processor 602 is used to execute the program instructions stored in the memory 601 to implement the steps of the corresponding methods in the embodiments shown in FIG. 3 and FIG. 4 above.
In one embodiment, when configured to call the program instructions, the processor 602 performs: obtaining the observation orientation of the target object relative to the mobile robot at the current moment; determining the angular velocity of the target object according to the observation orientation; and determining the motion control parameter according to the angular velocity, where the motion control parameter is used to control the aiming device to move toward the direction of the target object.
In one embodiment, the aiming device includes a camera, and when obtaining the observation orientation of the target object relative to the mobile robot at the current moment, the processor 602 performs the following operations: determining the target object according to the target image captured by the camera; and determining the observation orientation of the target object relative to the mobile robot at the current moment according to the target image and the target object.
In one embodiment, the observation orientation is expressed in polar coordinates, and when obtaining the observation orientation of the target object relative to the mobile robot at the current moment according to the target image and the target object, the processor 602 performs the following operations: determining the polar radius of the observation orientation according to the height of the target object in the target image and the actual height of the target object; and determining the polar angle of the observation orientation according to the angular equivalent of a pixel, the abscissa of the center point of the target object, and the horizontal resolution of the target image.
In one embodiment, the aiming device includes a camera and a time-of-flight (TOF) sensor, and when obtaining the observation orientation of the target object relative to the mobile robot at the current moment, the processor 602 performs the following operations: determining the target object according to the target image captured by the camera; determining the first observation orientation of the target object relative to the mobile robot at the current moment according to the target image and the target object; determining the second observation orientation of the target object relative to the mobile robot at the current moment according to the depth image obtained by the TOF sensor and the target object; and obtaining the observation orientation of the target object relative to the mobile robot at the current moment according to the first observation orientation and the second observation orientation.
In one embodiment, when determining the angular velocity of the target object according to the observation orientation, the processor 602 performs the following operations: determining the predicted orientation of the target object relative to the mobile robot at the current moment according to the position and velocity of the target object relative to the mobile robot at the previous moment; performing fusion filtering on the predicted orientation and the observation orientation; and determining the angular velocity of the target object according to the result of the fusion filtering.
In one embodiment, the fusion filtering includes Kalman filtering.
In one embodiment, when determining the angular velocity of the target object according to the result of the fusion filtering, the processor 602 performs the following operations: determining the angular deviation of the target object relative to the mobile robot according to the result of the fusion filtering; and differencing the angular deviation to obtain the angular velocity of the target object.
In one embodiment, when determining the motion control parameter according to the angular velocity, the processor 602 performs the following operations: performing tracking-differentiation processing on the angular velocity to obtain a differential output and a following output; and determining the motion control parameter according to the differential output and the following output.
In one embodiment, when determining the motion control parameter according to the differential output and the following output, the processor 602 performs the following operations: determining the differential gain and the following gain of the tracking-differentiation processing; and adding the result of multiplying the differential output by the differential gain to the result of multiplying the following output by the following gain to obtain the motion control parameter.
In one embodiment, when configured to call the program instructions, the processor 602 further performs: determining the motion control error according to the observation orientation and the current orientation of the mobile robot; and controlling the aiming device to move toward the direction of the target object according to the motion control error and the motion control parameter.
In one embodiment, the aiming device includes a gimbal, and when controlling the aiming device to move toward the direction of the target object, the processor 602 performs the following operation: controlling, through the gimbal, the aiming device to move toward the direction of the target object.
In one embodiment, when controlling the aiming device to move toward the direction of the target object, the processor 602 performs the following operation: driving the aiming device to move toward the direction of the target object by controlling the movement of the mobile robot.
Those of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments may be implemented by a computer program instructing relevant hardware; the program may be stored in a computer-readable storage medium, and when the program is executed it may include the processes of the embodiments of the above methods. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.
What is disclosed above is only part of the embodiments of the present invention and certainly cannot be used to limit the scope of the rights of the present invention; therefore, equivalent changes made according to the claims of the present invention still fall within the scope covered by the present invention.

Claims (25)

  1. An aiming control method, wherein the method is applied to a mobile robot, the mobile robot comprises an aiming device, and the method comprises:
    obtaining an observation orientation of a target object relative to the mobile robot at a current moment;
    determining an angular velocity of the target object according to the observation orientation;
    determining a motion control parameter according to the angular velocity, wherein the motion control parameter is used to control the aiming device to move toward a direction of the target object.
  2. The method according to claim 1, wherein the aiming device comprises a camera, and the obtaining an observation orientation of a target object relative to the mobile robot at a current moment comprises:
    determining the target object according to a target image captured by the camera;
    determining the observation orientation of the target object relative to the mobile robot at the current moment according to the target image and the target object.
  3. The method according to claim 2, wherein the observation orientation is expressed in polar coordinates, and the obtaining the observation orientation of the target object relative to the mobile robot at the current moment according to the target image and the target object comprises:
    determining a polar radius of the observation orientation according to a height of the target object in the target image and an actual height of the target object;
    determining a polar angle of the observation orientation according to an angular equivalent of a pixel, an abscissa of a center point of the target object, and a horizontal resolution of the target image.
  4. The method according to claim 1, wherein the aiming device comprises a camera and a TOF sensor, and the obtaining an observation orientation of a target object relative to the mobile robot at a current moment comprises:
    determining the target object according to a target image captured by the camera;
    determining a first observation orientation of the target object relative to the mobile robot at the current moment according to the target image and the target object;
    determining a second observation orientation of the target object relative to the mobile robot at the current moment according to a depth image obtained by the TOF sensor and the target object;
    obtaining the observation orientation of the target object relative to the mobile robot at the current moment according to the first observation orientation and the second observation orientation.
  5. The method according to claim 1, wherein the determining an angular velocity of the target object according to the observation orientation comprises:
    determining a predicted orientation of the target object relative to the mobile robot at the current moment according to a position and a velocity of the target object relative to the mobile robot at a previous moment;
    performing fusion filtering on the predicted orientation and the observation orientation;
    determining the angular velocity of the target object according to a result of the fusion filtering.
  6. The method according to claim 5, wherein the fusion filtering comprises Kalman filtering.
  7. The method according to claim 5, wherein the determining the angular velocity of the target object according to the result of the fusion filtering comprises:
    determining an angular deviation of the target object relative to the mobile robot according to the result of the fusion filtering;
    differencing the angular deviation to obtain the angular velocity of the target object.
  8. The method according to claim 1, wherein the determining a motion control parameter according to the angular velocity comprises:
    performing tracking-differentiation processing on the angular velocity to obtain a differential output and a following output;
    determining the motion control parameter according to the differential output and the following output.
  9. The method according to claim 8, wherein the determining the motion control parameter according to the differential output and the following output comprises:
    determining a differential gain and a following gain of the tracking-differentiation processing;
    adding a result of multiplying the differential output by the differential gain to a result of multiplying the following output by the following gain to obtain the motion control parameter.
  10. The method according to claim 1, wherein the method further comprises:
    determining a motion control error according to the observation orientation and a current orientation of the mobile robot;
    controlling the aiming device to move toward the direction of the target object according to the motion control error and the motion control parameter.
  11. The method according to claim 1, wherein the aiming device comprises a gimbal, and the controlling the aiming device to move toward the direction of the target object comprises:
    controlling, through the gimbal, the aiming device to move toward the direction of the target object.
  12. The method according to claim 1 or 11, wherein the controlling the aiming device to move toward the direction of the target object comprises:
    driving the aiming device to move toward the direction of the target object by controlling movement of the mobile robot.
  13. A mobile robot, wherein the mobile robot comprises an aiming device, and the mobile robot comprises a memory and a processor:
    the memory is configured to store program code;
    the processor calls the program code and, when the program code is executed, is configured to perform the following operations:
    obtaining an observation orientation of a target object relative to the mobile robot at a current moment;
    determining an angular velocity of the target object according to the observation orientation;
    determining a motion control parameter according to the angular velocity, wherein the motion control parameter is used to control the aiming device to move toward a direction of the target object.
  14. The mobile robot according to claim 13, wherein the aiming device comprises a camera, and when obtaining the observation orientation of the target object relative to the mobile robot at the current moment, the processor performs the following operations:
    determining the target object according to a target image captured by the camera;
    determining the observation orientation of the target object relative to the mobile robot at the current moment according to the target image and the target object.
  15. The mobile robot according to claim 14, wherein the observation orientation is expressed in polar coordinates, and when obtaining the observation orientation of the target object relative to the mobile robot at the current moment according to the target image and the target object, the processor performs the following operations:
    determining a polar radius of the observation orientation according to a height of the target object in the target image and an actual height of the target object;
    determining a polar angle of the observation orientation according to an angular equivalent of a pixel, an abscissa of a center point of the target object, and a horizontal resolution of the target image.
  16. The mobile robot according to claim 13, wherein the aiming device comprises a camera and a TOF sensor, and when obtaining the observation orientation of the target object relative to the mobile robot at the current moment, the processor performs the following operations:
    determining the target object according to a target image captured by the camera;
    determining a first observation orientation of the target object relative to the mobile robot at the current moment according to the target image and the target object;
    determining a second observation orientation of the target object relative to the mobile robot at the current moment according to a depth image obtained by the TOF sensor and the target object;
    obtaining the observation orientation of the target object relative to the mobile robot at the current moment according to the first observation orientation and the second observation orientation.
  17. The mobile robot according to claim 13, wherein when determining the angular velocity of the target object according to the observation orientation, the processor performs the following operations:
    determining a predicted orientation of the target object relative to the mobile robot at the current moment according to a position and a velocity of the target object relative to the mobile robot at a previous moment;
    performing fusion filtering on the predicted orientation and the observation orientation;
    determining the angular velocity of the target object according to a result of the fusion filtering.
  18. The mobile robot according to claim 17, wherein the fusion filtering comprises Kalman filtering.
  19. The mobile robot according to claim 17, wherein when determining the angular velocity of the target object according to the result of the fusion filtering, the processor performs the following operations:
    determining an angular deviation of the target object relative to the mobile robot according to the result of the fusion filtering;
    differencing the angular deviation to obtain the angular velocity of the target object.
  20. The mobile robot according to claim 13, wherein when determining the motion control parameter according to the angular velocity, the processor performs the following operations:
    performing tracking-differentiation processing on the angular velocity to obtain a differential output and a following output;
    determining the motion control parameter according to the differential output and the following output.
  21. The mobile robot according to claim 20, wherein when determining the motion control parameter according to the differential output and the following output, the processor performs the following operations:
    determining a differential gain and a following gain of the tracking-differentiation processing;
    adding a result of multiplying the differential output by the differential gain to a result of multiplying the following output by the following gain to obtain the motion control parameter.
  22. The mobile robot according to claim 13, wherein the processor calls the program code and, when the program code is executed, is further configured to perform the following operations:
    obtaining a motion control error between the observation orientation and a current orientation of the mobile robot;
    controlling the aiming device to move toward the direction of the target object according to the motion control error and the motion control parameter.
  23. The mobile robot according to claim 13, wherein the aiming device comprises a gimbal, and when controlling the aiming device to move toward the direction of the target object, the processor performs the following operation:
    controlling, through the gimbal, the aiming device to move toward the direction of the target object.
  24. The mobile robot according to claim 13 or 23, wherein when controlling the aiming device to move toward the direction of the target object, the processor performs the following operation:
    driving the aiming device to move toward the direction of the target object by controlling movement of the mobile robot.
  25. A computer-readable storage medium, wherein the computer-readable storage medium stores a computer program, the computer program comprises program instructions, and the program instructions, when executed by a processor, cause the processor to perform the aiming control method according to any one of claims 1-12.
PCT/CN2019/085245 2019-04-30 2019-04-30 一种瞄准控制方法、移动机器人及计算机可读存储介质 WO2020220284A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201980002956.6A CN110876275A (zh) 2019-04-30 2019-04-30 一种瞄准控制方法、移动机器人及计算机可读存储介质
PCT/CN2019/085245 WO2020220284A1 (zh) 2019-04-30 2019-04-30 一种瞄准控制方法、移动机器人及计算机可读存储介质

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/085245 WO2020220284A1 (zh) 2019-04-30 2019-04-30 一种瞄准控制方法、移动机器人及计算机可读存储介质

Publications (1)

Publication Number Publication Date
WO2020220284A1 true WO2020220284A1 (zh) 2020-11-05

Family

ID=69717609

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/085245 WO2020220284A1 (zh) 2019-04-30 2019-04-30 一种瞄准控制方法、移动机器人及计算机可读存储介质

Country Status (2)

Country Link
CN (1) CN110876275A (zh)
WO (1) WO2020220284A1 (zh)


Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113110025A (zh) * 2021-04-08 2021-07-13 深兰科技(上海)有限公司 机器人的行进控制方法、系统、电子设备及存储介质
CN113608233A (zh) * 2021-06-30 2021-11-05 湖南宏动光电有限公司 一种基于坐标变换的虚拟瞄具实现方法及系统
CN114035186B (zh) * 2021-10-18 2022-06-28 北京航天华腾科技有限公司 一种目标方位跟踪指示系统及方法

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20120070891A (ko) * 2010-12-22 2012-07-02 국방과학연구소 3차원 영상 보조 항법 장치 및 이를 이용한 관성 항법 시스템
CN105518702A (zh) * 2014-11-12 2016-04-20 深圳市大疆创新科技有限公司 一种对目标物体的检测方法、检测装置以及机器人
US20160249856A1 (en) * 2015-02-27 2016-09-01 Quentin S. Miller Enhanced motion tracking using a transportable inertial sensor
CN107014378A (zh) * 2017-05-22 2017-08-04 中国科学技术大学 一种视线跟踪瞄准操控系统及方法
CN108051001A (zh) * 2017-11-30 2018-05-18 北京工商大学 一种机器人移动控制方法、系统及惯性传感控制装置
CN207456288U (zh) * 2017-11-30 2018-06-05 深圳市大疆创新科技有限公司 一种激光瞄准调节装置
CN108780321A (zh) * 2017-05-26 2018-11-09 深圳市大疆创新科技有限公司 用于设备姿态调整的方法、设备、系统和计算机可读存储介质

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103425146B (zh) * 2013-08-01 2016-01-20 北京航空航天大学 一种基于角加速度的惯性稳定平台干扰观测器设计方法
CN104267743B (zh) * 2014-07-22 2017-01-11 浙江工业大学 一种采用自抗扰控制技术的船载摄像稳定平台控制方法
CN104764451A (zh) * 2015-04-23 2015-07-08 北京理工大学 一种基于惯性和地磁传感器的目标姿态跟踪方法
EP3353706A4 (en) * 2015-09-15 2019-05-08 SZ DJI Technology Co., Ltd. SYSTEM AND METHOD FOR MONITORING UNIFORM TARGET TRACKING
CN106647257B (zh) * 2016-10-14 2020-01-03 中国科学院光电技术研究所 一种基于正交最小二乘的前馈控制方法
CN106780542A (zh) * 2016-12-29 2017-05-31 北京理工大学 一种基于嵌入卡尔曼滤波器的Camshift的机器鱼跟踪方法
CN106873628B (zh) * 2017-04-12 2019-09-20 北京理工大学 一种多无人机跟踪多机动目标的协同路径规划方法
CN107993257B (zh) * 2017-12-28 2020-05-19 中国科学院西安光学精密机械研究所 一种智能imm卡尔曼滤波前馈补偿目标追踪方法及系统
CN108107738A (zh) * 2018-02-08 2018-06-01 上海机电工程研究所 变采样率非线性驱动惯性稳定跟踪控制系统及方法
CN109003292B (zh) * 2018-06-25 2022-01-18 华南理工大学 一种基于开关卡尔曼滤波器的运动目标跟踪方法

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114820721A (zh) * 2022-05-17 2022-07-29 苏州轻棹科技有限公司 一种卡尔曼滤波观测噪声的可视化调制方法和装置
CN114820721B (zh) * 2022-05-17 2024-03-26 苏州轻棹科技有限公司 一种卡尔曼滤波观测噪声的可视化调制方法和装置
CN116468797A (zh) * 2023-03-09 2023-07-21 北京航天众信科技有限公司 一种挂轨式机器人瞄准方法、装置及计算机设备
CN116468797B (zh) * 2023-03-09 2023-11-24 北京航天众信科技有限公司 一种挂轨式机器人瞄准方法、装置及计算机设备

Also Published As

Publication number Publication date
CN110876275A (zh) 2020-03-10

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19927020

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19927020

Country of ref document: EP

Kind code of ref document: A1