CN110660086B - Motion control method and system based on optical flow algorithm

Motion control method and system based on optical flow algorithm

Info

Publication number
CN110660086B
CN110660086B
Authority
CN
China
Prior art keywords
image
optical flow
previous frame
VxLast
vector
Legal status
Active
Application number
CN201910519529.7A
Other languages
Chinese (zh)
Other versions
CN110660086A
Inventor
武彦
刘晨
苏佳佳
Current Assignee
Allwinner Technology Co Ltd
Original Assignee
Allwinner Technology Co Ltd
Application filed by Allwinner Technology Co Ltd
Priority to CN201910519529.7A
Publication of application CN110660086A
Application granted; publication of CN110660086B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/269 Analysis of motion using gradient-based methods
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; image sequence
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20016 Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; pyramid transform
    • G06T 2207/20024 Filtering details

Abstract

The invention discloses a motion control method and system based on an optical flow algorithm. The method depends on no sensing device other than a camera and solves the failure of the optical flow algorithm in high-speed unmanned aerial vehicle motion scenes, thereby reducing product cost and improving product performance and competitiveness. It also resolves the conflict between the high hardware consumption of the optical flow algorithm and the limited hardware capability of embedded devices: several optimization strategies trade lower resource consumption for higher performance, further reducing cost. In addition, the motion control method makes full use of intermediate optical-flow results to provide users with more comprehensive functions such as scene recognition and intelligent early warning.

Description

Motion control method and system based on optical flow algorithm
Technical Field
The invention relates to the field of computers, in particular to motion control methods, and more particularly to a motion control method for fast-moving equipment such as unmanned aerial vehicles.
Background
Aircraft such as unmanned aerial vehicles are now in widespread use. Compared with unmanned aerial vehicles that require manual control, vehicles controlled fully and autonomously by a flight control system better meet practical working needs; the flight control system achieves positioning, speed measurement, distance measurement and real-time control of the machine through data supplied by cameras, GPS and other sensors.
Optical flow algorithms are commonly used on aircraft such as unmanned aerial vehicles to provide positioning, speed measurement and distance measurement, serving as the "eyes" of the vehicle. Specifically, the unmanned aerial vehicle acquires image data with a downward-facing camera, computes the offset between two frames with an optical flow algorithm to obtain the motion state of the aircraft, and feeds this state back to the flight control system, thereby closing the control loop for stable flight, acceleration and deceleration, hovering and so on. Outdoors, optical flow combined with GPS enables accurate position control of the unmanned aerial vehicle; indoors, where there is no GPS signal, it can still provide local high-precision positioning.
The concept of optical flow was first proposed by Gibson in 1950. An optical flow method uses the temporal variation of pixels in an image sequence and the correlation between adjacent frames to find the correspondence between the previous frame and the current frame, and thereby computes the motion of objects between adjacent frames. In general, optical flow arises from movement of the foreground, motion of the camera, or both. Since optical flow carries the motion information of the image, an observer can use it to determine the motion of objects.
In principle, an optical flow algorithm rests on the premise that adjacent frames are captured close together in time, or that the motion between adjacent frames is comparatively small. For an unmanned aerial vehicle, this means the camera must have a fairly high frame rate and the relative speed between the aircraft and the ground must not be too large, so that adjacent frames have a small motion offset. A low-frame-rate camera, high-speed flight or low-altitude flight enlarges the difference between two frames, so the optical flow algorithm can no longer follow the motion state of the aircraft and outputs distorted or even completely wrong data, which easily causes flight accidents once it enters the flight control system.
However, a high-frame-rate camera demands a stronger processor and higher hardware cost, and products such as consumer and toy-grade unmanned aerial vehicles are very cost-sensitive. Their embedded hardware resources (such as CPU and memory) are often very limited and are allocated first to the flight control software, which has stricter real-time requirements, so the resources left for the optical flow software are often stretched thin. This has become one of the bottlenecks that greatly restrict product performance. If better algorithm performance could be obtained with lower resource consumption, the product would gain in both performance and cost.
The industry has two remedies for the above drawbacks of the optical flow algorithm. The first fuses data from multiple sensors such as GPS and accelerometers to correct the invalid optical flow data produced by high-speed motion; this scheme is highly reliable but adds hardware cost and system complexity. The second restricts the application scenarios of the optical flow algorithm: when optical flow fails because the flight speed is too high or the altitude too low, the vehicle enters an "attitude mode" and the user must control the attitude of the aircraft manually. This scheme demands considerable piloting skill from the user and is not reliable enough.
On the other hand, some improved technical solutions for optical flow algorithms have been proposed in the industry. For example, the prior art discloses a quad-rotor unmanned aerial vehicle flight control method based on optical flow, which computes optical flow information with the image-pyramid Lucas-Kanade method, processes it with Kalman filtering, fuses the optical flow with attitude-angle data to compute the horizontal displacement of the vehicle, and uses the result as feedback for a proportional-derivative controller to control the position of a small unmanned aerial vehicle. Its drawbacks are that it solves neither the failure of the optical flow algorithm under high-speed motion nor the algorithm's heavy occupation of hardware resources.
The prior art also discloses an image processing method based on an optical flow algorithm, addressing the large computation and low precision of video stabilization. After a current frame of the video sequence to be processed is obtained, the positions of a group of pixels of the current frame in the next frame are found with an optical flow algorithm; the group is sampled against a global parameter model, pixels whose displacement lies within a preset range are retained, global motion parameters from the current frame to the next frame are derived from the coordinates of those pixels, and the next frame is then compensated according to the smoothed result. This is an improvement to video stabilization: it compensates and corrects the video to improve its stability, but although it employs an optical flow algorithm it does not perform motion control from the moving images, and it is not suitable for the high-speed flight motion control of products such as unmanned aerial vehicles.
The prior art also discloses a displacement-control method for moving images that overcomes the inability of traditional optical flow algorithms to handle images with large offsets: it improves the pyramid-model core of the traditional algorithm, extracts feature vectors and builds an energy function at each pyramid layer, iteratively solves a motion displacement field, and finally obtains a corrected, registered image. The method is suited to solving the low-precision and over-smoothing problems that arise with non-rigid images and large displacement deformation, but it places heavy demands on hardware resources.
In summary, for the frequent failure of optical-flow-based motion control in high-speed unmanned aerial vehicle flight, existing methods usually add GPS, accelerometers, ultrasonic sensors and the like as corrective measures to make up for the deficiency of the optical flow algorithm; these sensor devices, however, raise product cost. Likewise, using a high-performance processor and memory to strengthen the processing capability of the optical flow algorithm greatly increases product cost. In addition, the traditional optical-flow-based motion control method occupies a large share of hardware resources such as CPU and memory, which makes it difficult to run on embedded devices such as unmanned aerial vehicles, and it offers no scene recognition or intelligent early warning, which greatly limits its adoption.
Disclosure of Invention
The invention discloses a motion control method based on an optical flow algorithm. One aim is to solve the failure of the optical flow algorithm in high-speed unmanned aerial vehicle motion scenes without depending on any sensing device other than the camera, thereby reducing product cost and improving product performance and competitiveness. Another aim is to resolve the conflict between the high hardware consumption of the optical flow algorithm and the limited hardware capability of embedded devices, that is, to adopt a variety of optimization strategies that trade lower resource consumption for higher performance, further reducing cost. In addition, the motion control method of the invention makes full use of intermediate optical-flow results to provide users with more comprehensive functions such as scene recognition and intelligent early warning.
The motion control method based on the optical flow algorithm can be used for motion control of the unmanned aerial vehicle.
The motion control method based on an optical flow algorithm of the invention mainly comprises the following process. After the unmanned aerial vehicle captures an image with its downward-facing camera, the image is first preprocessed (grayscale conversion, cropping, scaling) to obtain a smaller image that retains most features, which serves as the input of the optical flow algorithm. Next, according to the optical flow data obtained in the previous run, the input image is translated and cropped accordingly to obtain a registered image. An L-K pyramid is then built for the image; starting from the top layer of the pyramid, the pixels most similar to the feature points of the previous frame are searched for, iteration proceeds toward the lower layers with those pixels as initial values, and an offset feature matrix between the two frames is solved. The matrix is then post-processed (screening, averaging, filtering) to obtain the offset between the two frames. This offset is the output of the optical flow algorithm: it is fed back to the flight control program, for example through a message queue, and also serves as the initial value for the registration of the next optical flow image.
In addition, the motion control method of the invention makes full use of intermediate algorithm products to provide users with more comprehensive functions such as data confidence, texture quality, scene recognition and intelligent early warning.
The motion control method based on the optical flow algorithm comprises the following steps:
step 1: image acquisition
Reading a frame of image from a camera, converting the frame of image into a gray level image, and recording the gray level image as ImgGrey;
step 2: image pre-processing
step 21: cropping a sub-image of size ClippedSizeX x ClippedSizeY from the center of the grayscale image, and recording the sub-image as ClippedImg; wherein ClippedSizeX is the first side length of the sub-image, and ClippedSizeY is the second side length of the sub-image;
step 22: scaling the sub-image ClippedImg down by the factor Binning into a small image of size ResizedSizeX x ResizedSizeY, recorded as ResizedImg; wherein Binning is the zoom factor, ResizedSizeX is the first side length of the small image, and ResizedSizeY is the second side length of the small image;
step 3: frame skip determination
step 31: calculating the accuracy of the most recent N optical flow runs; if the accuracy is higher than a first threshold and the previous frame was not skipped, skipping the current frame and continuing to use the previous data; otherwise not skipping the frame and continuing with the following operations, wherein N is an integer;
in a particular embodiment, N is preferably from 3 to 10, more preferably from 5 to 7;
in one embodiment, the first threshold is preferably 10-30%, more preferably 15-20%.
And 4, step 4: motion compensation
Step 41: cropping the minimap resize img into a cropped image of size (resize sizex- | VxLast |) x (resize sizey- | vyl last |) according to the last effective optical flow velocity (VxLast, vyl), which is denoted as shifted img, and representing the rectangular range of the cropped image shifted img by (left1, right1, top1, bottom 1); left1, right1, top1 and bottom1 respectively represent the coordinates of the left, right, top and bottom sides of the clipped image shiftedmimg; in a preferred embodiment, the cropping method for cropping the minimap resize img to obtain the cropped image shifted img comprises the following steps:
if the speed VxLast in the direction of the last effective optical flow speed X is greater than 0, the left coordinate left1 of the cut image is made to be 0, and the right coordinate right1 of the cut image is made to be ResizedSizeX- | VxLast |; otherwise, the left coordinate left1 of the cropping image is made to be | VxLast |, and the right coordinate right1 of the cropping image is made to be resize sizex;
if the speed VyLast in the Y direction of the last effective optical flow speed is greater than 0, the top edge coordinate top1 of the cut image is made to be 0, and the bottom edge coordinate bottom1 of the cut image is resized SizeY- | VyLast |; otherwise, let top side coordinate top1 of the cropped image be | vyl last |, and bottom side coordinate bottom1 of the cropped image be ResizedSizeY.
Step 42: cutting an image of an adjacent previous frame minimap resized imgl, and obtaining a previous frame cut image with the size of (resized sizex- | VxLast |) x (resized sizey- | vyl ast |), and recording the previous frame cut image as shifted imgl; a rectangular range of the previous frame cropping image shiftedmingllast is represented by (left2, right2, top2, bottom 2); left2, right2, top2 and bottom2 respectively represent the coordinates of the left edge, the right edge, the top edge and the bottom edge of the previous frame of the clipped image shiftedImgLast;
in a preferred embodiment, the cropping method for cropping the resized imgl ast of the previous frame to obtain the shifted imgl ast of the previous frame comprises the following steps:
if the speed VxLast in the last effective optical flow speed X direction is greater than 0, the left coordinate left2 of the previous frame of the cut image is made VxLast, and the right coordinate right2 of the previous frame of the cut image is made ResizedSizeX; otherwise, the left coordinate left2 of the previous frame of the cropping image is made to be 0, and the right coordinate right2 of the previous frame of the cropping image is made to be resize sizex- | VxLast |;
if the speed VyLast in the Y direction of the last effective optical flow speed is greater than 0, enabling the top side coordinate top2 of the previous frame of the cut image to be VyLast and the bottom side coordinate bottom2 of the previous frame of the cut image to be ResizedSizeY; otherwise, the top side coordinate top2 of the previous frame of the cropping image is made to be 0, and the bottom side coordinate bottom2 of the previous frame of the cropping image is made to be resized sizey- | vyl |.
Step 43: sending the cut image ShiftedImg and the previous cut image ShiftedImgLast into an optical flow pyramid;
wherein VxLast is the speed of the last effective optical flow speed in the X direction; VyLast is the speed in the Y direction of the last effective optical flow speed; the | VxLast | is the absolute value of the speed in the X direction of the last effective optical flow speed; the absolute value of the velocity in the Y direction of the last effective optical flow velocity is | VyLast |;
step 5: calculating optical flow
calculating on the cropped image ShiftedImg and the previous-frame cropped image ShiftedImgLast to obtain a first vector (VxRaw, VyRaw) representing the overall offset of the image;
in one embodiment, step 5 comprises the following sub-steps:
step 51: applying an improved Harris point selection algorithm to a previous frame of cut image ShiftedImgLast, and judging whether a specified area of the cut image is an angular point, an edge or a smooth area; and simultaneously, evaluating texture quality, fuzzy degree and brightness information according to the density degree of the corner points.
In this step, the input parameters of the Harris point selection algorithm include the minimum distance (in pixels) between a point and the point, and the number range of points to be generated; the minimum distance between a point and a point is 2-10 pixels, and the number of points ranges from 10-50.
Step 52: forward tracking: respectively establishing a pyramid Pymd and a PymdLast for a cut image ShiftedImg and a previous cut image ShiftedImgLast by taking the L-K pyramid as a prototype; searching a pixel point which is most similar to the first characteristic point at the same layer of the pyramid PymdLast by taking the first characteristic point at the top layer of the pyramid Pymd as a reference, iterating towards the lower layer of the pyramid by taking the coordinates of the pixel point as an initial value, and solving a first offset characteristic matrix MatrixForward between two frames of images, wherein each element in the matrix represents a motion vector (Vx, Vy) of one characteristic point;
step 53: and (3) back tracking: respectively establishing a pyramid PymdLast and a pyramid Pymd for a previous frame of a cut image ShiftedImgLast and a cut image ShiftedImg by taking an L-K pyramid as a prototype; and (3) searching a pixel point which is most similar to the second feature point at the same layer of the pyramid Pymd last by taking the second feature point at the top layer of the pyramid Pymd as a reference, iterating towards the lower layer of the pyramid by taking the coordinates of the pixel point as an initial value, and solving another offset feature matrix Matrixback ward between two frames of images, wherein each element in the matrix represents a motion vector (Vx, Vy) of one feature point. In the step, the sequence of the cutting image ShiftedImg and the cutting image ShiftedImgLast of the previous frame is changed, the pyramid is reestablished in the previous step, and a second offset feature matrix MatrixBackward is calculated;
step 54: comparing two offset matrixes MatrixForward and MatrixBackward obtained in the forward tracking step and the backward tracking step, and eliminating elements with overlarge difference values to obtain an effective characteristic matrix MatrixValid; meanwhile, according to the number of the elements to be removed, obtaining Confidence; wherein, the confidence coefficient is 1- (the number of the elements removed in the current Matrix/the total number of the elements);
in one embodiment, the elimination method comprises the following steps: subtracting the matrix MatrixForward and the matrix Backward of the two matrixes, and calculating the absolute value of the result to obtain a new matrix MatrixDiff; traversing each element of the new matrix Matrix diff, and if the value of a certain element is greater than a threshold value round _ trip _ diff _ thresh, rejecting the point;
in a preferred embodiment, the round trip diff thresh is preferably in the range 0-1, more preferably 0.1-0.5, in pixels.
Step 55: averaging all elements in the effective feature matrix to obtain a first vector (VxRaw, VyRaw) representing the integral offset of the image;
step 6: data post-processing
performing amplitude limiting and filtering on the first vector (VxRaw, VyRaw), then performing motion compensation and scaling to obtain the optical flow velocity (VxUnbinning, VyUnbinning);
in one embodiment, step 6 comprises the following sub-steps:
step 61: and (4) carrying out amplitude limiting and directional filtering on the first vector (VxRaw, VyRaw), and removing unreasonable values to obtain a second vector (VxLmt, VyLmt). When the unmanned aerial vehicle is controlled, the vector (VxLmt, VyRaw) can be subjected to amplitude limiting and direction filtering based on the characteristics that the upper and lower limits of the flight speed, the size of the flight speed and the direction of the unmanned aerial vehicle cannot be suddenly changed, and unreasonable values are removed to obtain the vector (VxLmt, VyLmt).
In this step, the method of performing slice processing on the first vector (VxRaw, VyRaw) includes:
if | VxRaw | is greater than the threshold value delta _ X _ max, or | VyRaw | is greater than the threshold value delta _ Y _ max, this point is discarded as invalid data. Wherein the thresholds delta _ X _ max and delta _ Y _ max are scaled according to the image scaling according to the maximum flying speed of the aircraft, the ranges of delta _ X _ max and delta _ Y _ max being 1-5, preferably 2-4.
Step 62: and sending the second vector (VxLmt, VyLmt) into a median filter and a mean filter in sequence to obtain a smoothed third vector (VxFlt, VyFlt).
And step 63: the third vector (VxFlt, VyFlt) is added with the offset of motion compensation (VxLast, VyLast) and restored to the fourth vector (VyUnshifted ) before motion compensation.
Step 64: multiplying the fourth vector (VyUnshifted) by the zoom factor binding, and returning to the fifth vector (vyunbining) before zooming, that is, the optical flow velocity (vyunbining) of the motion device; the optical flow velocity (vyunbining ) is sent to a flight control program.
In addition, the method can also comprise the following steps:
step 7: image evaluation and abnormality warning
obtaining flight state information and scene information from the image texture quality, blur degree and brightness information obtained in step 51, combined with the Confidence obtained in step 54; obtaining abnormality early-warning information when the algorithm fails;
step 8: data output
sending the optical flow velocity (VxUnbinning, VyUnbinning), the Confidence, the flight state, the scene information and the abnormality early-warning information obtained in the above steps to the flight control program.
Drawings
FIG. 1 is a flow chart of the method for controlling motion based on optical flow algorithm of the present invention.
Fig. 2 is a diagram showing experimental comparison between the motion control method of the present invention and the motion control method of the prior art.
Detailed Description
In order to describe the technical solutions of the invention in more detail and facilitate further understanding, specific embodiments of the invention are described below with reference to the accompanying drawings. It should be understood, however, that the illustrative embodiments and their descriptions are intended to explain the invention and are not to be construed as limiting it.
The motion control method based on the optical flow algorithm of the invention can be used for motion control of an unmanned aerial vehicle, for example one carrying the Allwinner MR100 chip.
As shown in FIG. 1, the method for controlling motion based on optical flow algorithm of the present invention comprises the following steps:
step 1: image acquisition
One frame of image is read from the camera and converted into a grayscale image, denoted ImgGrey. For example, for a device that outputs YUV data, only the Y component need be taken.
Step 2: image pre-processing
Step 21: crop a sub-image of size ClippedSizeX x ClippedSizeY from the center of the grayscale image and denote it ClippedImg, where ClippedSizeX is the first side length of the sub-image and ClippedSizeY is the second. The main purpose of cropping ClippedImg in this step is to reduce subsequent computation and to remove invalid information at the edges of the grayscale image ImgGrey.
Step 22: scale the sub-image ClippedImg down by the factor Binning into a small image of size ResizedSizeX x ResizedSizeY, denoted ResizedImg, where Binning is the zoom factor, ResizedSizeX is the first side length of the small image and ResizedSizeY is the second. This step further compresses the grayscale image, again to reduce the amount of subsequent computation.
Step 3: frame skip determination
Step 31: compute the accuracy of the most recent N optical flow runs; if the accuracy is higher than the first threshold and the previous frame was not skipped, skip the current frame and keep using the previous data; otherwise do not skip the frame and continue with the following operations, where N is an integer.
In one embodiment, N is preferably from 3 to 10, more preferably from 5 to 7.
In one embodiment, the first threshold is preferably 10-30%; more preferably 15-20%.
Step 4: motion compensation
Step 41: according to the last effective optical flow velocity (VxLast, VyLast), crop the small image ResizedImg into a cropped image of size (ResizedSizeX - |VxLast|) x (ResizedSizeY - |VyLast|), denoted ShiftedImg, and represent the rectangular range of the cropped image ShiftedImg by (left1, right1, top1, bottom1); where VxLast is the X-direction component of the last effective optical flow velocity, VyLast is the Y-direction component, |VxLast| and |VyLast| are their absolute values, and left1, right1, top1 and bottom1 respectively denote the coordinates of the left, right, top and bottom edges of the cropped image ShiftedImg.
The cropping method for obtaining the cropped image ShiftedImg from the small image ResizedImg is as follows:
if the velocity VxLast in the X direction of the last effective optical flow velocity is greater than 0, set the left coordinate left1 of the cropped image to 0 and the right coordinate right1 to ResizedSizeX - |VxLast|; otherwise, set left1 to |VxLast| and right1 to ResizedSizeX;
if the velocity VyLast in the Y direction of the last effective optical flow velocity is greater than 0, set the top coordinate top1 of the cropped image to 0 and the bottom coordinate bottom1 to ResizedSizeY - |VyLast|; otherwise, set top1 to |VyLast| and bottom1 to ResizedSizeY.
In a specific embodiment, the cropping rule may be represented in computer code; a minimal Python sketch of the rule just described follows (variable names mirror the patent's notation, and the integer truncation of the velocity magnitudes is an assumption).
Step 42: crop the adjacent previous-frame small image ResizedImgLast to obtain a previous-frame cropped image of size (ResizedSizeX - |VxLast|) x (ResizedSizeY - |VyLast|), denoted ShiftedImgLast; represent the rectangular range of the previous-frame cropped image ShiftedImgLast by (left2, right2, top2, bottom2); where left2, right2, top2 and bottom2 respectively denote the coordinates of the left, right, top and bottom edges of the previous-frame cropped image ShiftedImgLast.
The cropping method for obtaining the previous-frame cropped image ShiftedImgLast from the previous-frame small image ResizedImgLast is as follows:
if VxLast is greater than 0, set the left coordinate left2 of the previous-frame cropped image to |VxLast| and the right coordinate right2 to ResizedSizeX; otherwise, set left2 to 0 and right2 to ResizedSizeX - |VxLast|;
if VyLast is greater than 0, set the top coordinate top2 of the previous-frame cropped image to |VyLast| and the bottom coordinate bottom2 to ResizedSizeY; otherwise, set top2 to 0 and bottom2 to ResizedSizeY - |VyLast|.
In a specific embodiment, this cropping rule may likewise be represented in code; a minimal Python sketch follows, under the same assumptions as above.
Step 43: the cropped image ShiftedImg and the previous-frame cropped image ShiftedImgLast are fed into the optical flow pyramid together.
With frame-skip determination used to predict the velocity and the motion compensation algorithm introduced in the above steps, the motion control method based on the optical flow algorithm overcomes the limitation of traditional optical flow algorithms to low-speed motion and slowly changing scenes, greatly widening the scenarios in which optical flow applies. With a low camera frame rate or a fast-moving unmanned aerial vehicle, the motion control method still performs well, trading lower resource consumption for higher performance, reducing product cost and improving product performance and competitiveness.
Step 5: calculating optical flow
Step 51: apply an improved Harris point selection algorithm to the previous-frame cropped image ShiftedImgLast, judging whether a specified area of the cropped image is a corner point, an edge or a smooth area; at the same time, evaluate the texture quality and blur degree according to the density of the corner points, and obtain the brightness information.
In this step, the input parameters of the Harris algorithm include the minimum distance between points (in pixels) and the range of the number of points to be generated.
In a preferred embodiment, the minimum distance between points is 2-10 pixels, and the number of points ranges from 10 to 50.
The point selection function is the core of this step; the texture quality, blur degree, brightness and similar information serve as auxiliary data for post-processing the optical flow result.
Step 52: forward tracking: using the L-K pyramid as a prototype, build pyramids Pymd and PymdLast for the cropped image ShiftedImg and the previous-frame cropped image ShiftedImgLast respectively; taking the first feature points at the top layer of pyramid Pymd as a reference, search the same layer of pyramid PymdLast for the most similar pixel points, iterate toward the lower layers of the pyramid with the coordinates of those pixel points as initial values, and solve a first offset feature matrix MatrixForward between the two frames, in which each element represents the motion vector (Vx, Vy) of one feature point;
step 53: backward tracking: using the L-K pyramid as a prototype, build pyramids PymdLast and Pymd for the previous-frame cropped image ShiftedImgLast and the cropped image ShiftedImg respectively; taking the second feature points at the top layer of pyramid Pymd as a reference, search the same layer of pyramid PymdLast for the most similar pixel points, iterate toward the lower layers of the pyramid with the coordinates of those pixel points as initial values, and solve another offset feature matrix MatrixBackward between the two frames, in which each element represents the motion vector (Vx, Vy) of one feature point. In this step, the order of the cropped image ShiftedImg and the previous-frame cropped image ShiftedImgLast is exchanged, the pyramids of the previous step are re-established, and the second offset feature matrix MatrixBackward is calculated.
Step 54: compare the two offset matrices MatrixForward and MatrixBackward obtained in the forward and backward tracking steps and eliminate elements whose difference is too large, obtaining the effective feature matrix MatrixValid; derive the Confidence from the number of eliminated elements.
In general, the difference between the forward and backward tracking of a feature point is very small. A large difference means the tracking result for that feature point is wrong, and the point is eliminated.
The specific elimination method is as follows: subtract the two matrices MatrixForward and MatrixBackward and take the absolute value of the result to obtain a new matrix MatrixDiff; then traverse each element of the new matrix, and if the value of an element is greater than the threshold round_trip_diff_thresh, eliminate that point. round_trip_diff_thresh preferably ranges from 0 to 1, more preferably from 0.1 to 0.5, in pixels.
In one embodiment, the culling method can be expressed in code; a minimal Python sketch follows (the matrix shapes are those returned by the tracking sketch above).
Here Confidence = 1 - (number of eliminated elements in the current matrix / total number of elements).
Step 55: average all elements in the effective feature matrix to obtain the first vector (VxRaw, VyRaw) representing the overall offset of the image.
The above steps adopt a dynamic load-balancing strategy, performing the optical flow computation at different frame intervals in low-speed and high-speed motion scenes. This reduces the amount of computation while preserving the algorithm's effectiveness, cuts CPU consumption severalfold, and satisfies the real-time requirements of the unmanned aerial vehicle system. The low consumption and low latency make the control method of the invention particularly suitable for embedded platforms with limited hardware resources.
In addition, the method uses round-trip tracking to improve the accuracy of the algorithm and eliminate the influence of interference such as noise points, and adopts texture evaluation and dynamic point selection strategies to choose the image feature points for the optical flow computation accurately and efficiently.
Step 6: data post-processing
Step 61: perform amplitude limiting and directional filtering on the first vector (VxRaw, VyRaw), removing unreasonable values to obtain a second vector (VxLmt, VyLmt). When controlling an unmanned aerial vehicle, the amplitude limiting and directional filtering of (VxRaw, VyRaw) can be based on the upper and lower bounds of the flight speed and on the fact that neither the magnitude nor the direction of the flight speed can change abruptly; removing unreasonable values yields the vector (VxLmt, VyLmt).
In this step, the method of amplitude limiting the vector (VxRaw, VyRaw) is as follows:
if |VxRaw| is greater than the threshold delta_X_max, or |VyRaw| is greater than the threshold delta_Y_max, the point is discarded as invalid data. The thresholds delta_X_max and delta_Y_max are derived from the maximum flight speed of the aircraft and scaled with the image scaling; delta_X_max and delta_Y_max range from 1 to 5, preferably 2 to 4.
In this step, the directional filtering includes median filtering to smooth the data curve. A filter length of 2 is preferred; an overly long filter causes phase lag in the curve and hurts the real-time behavior of the algorithm output.
Step 62: send the second vector (VxLmt, VyLmt) through a median filter and then a mean filter to obtain a smoothed third vector (VxFlt, VyFlt).
Step 63: add the motion compensation offset (VxLast, VyLast) to the third vector (VxFlt, VyFlt), restoring the fourth vector (VxUnshifted, VyUnshifted) before motion compensation; the fourth vector also serves as the motion compensation offset (VxLast', VyLast') in step 41 of the next round of operation.
Step 64: multiply the fourth vector (VxUnshifted, VyUnshifted) by the zoom factor Binning to return to the fifth vector (VxUnbinning, VyUnbinning) before scaling, i.e. the optical flow velocity of the moving device.
As the above steps show, the control method of the invention has a thorough data post-processing mechanism: on one hand, it evaluates multi-dimensional information such as texture, blur degree, brightness and algorithm confidence to analyze and judge the optical flow result; on the other hand, relying on the fact that the flight speed of an unmanned aerial vehicle cannot change abruptly, it filters out invalid values and recovers distorted values as far as possible by means such as directional filtering and mutation suppression.
Step 7: image evaluation and abnormality warning
The image texture quality, blur degree and brightness information obtained in step 51 are fused with the Confidence obtained in step 54 to estimate flight state and scene information, such as takeoff, landing, low-altitude flight, high-altitude flight and night flight. When the algorithm fails, for example because of insufficient image texture or overspeed flight, abnormality early-warning information is obtained.
Step 8: data output
The optical flow velocity (VxUnbinning, VyUnbinning) obtained in the above steps, the Confidence, the image quality, the abnormal state and so on are packaged together and sent to the flight control program.
The method can thus provide intelligent early warning at the same time: by fusing multi-dimensional information such as image texture, blur degree, brightness and algorithm confidence, it can give the user warning information when the algorithm fails, for example because of insufficient image texture or overspeed flight, improving flight safety.
The invention also discloses a motion control system based on the optical flow algorithm, which comprises the following components:
an image acquisition module:
the image acquisition module is used for reading a frame of image from the camera, converting the frame of image into a gray level image and recording the gray level image as ImgGrey;
an image preprocessing module:
the image preprocessing module is used for intercepting a sub-image with the size of ClippedSizeX x ClippedSizeY from the center of the image of the gray scale image and recording the sub-image as ClippedImg; wherein ClippedSizeX is the first side length of the subgraph, and ClippedSizeY is the second side length of the subgraph; the sub-graph ClippedImmg is zoomed and binned to be a small graph with the size ResizedSizeX x ResizedSizeY and is marked as ResizedImg; wherein binding is a zoom factor, resize sizex is a first side length of the thumbnail, and resize sizey is a second side length of the thumbnail;
a frame skipping judgment module:
the frame skipping judgment module is used for calculating the accuracy of the latest N times of optical flow operation; if the accuracy is higher than the first threshold value and the previous frame is not skipped, skipping the current frame and continuing to use the previous data; otherwise, the frame is not skipped, and the following operation is continued, wherein N is an integer;
a motion compensation module:
motion compensation module for
Cropping the minimap resize img into a cropped image of size (resize sizex- | VxLast |) x (resize sizey- | vyl last |) according to the last effective optical flow velocity (VxLast, vyl), which is denoted as shifted img, and representing the rectangular range of the cropped image shifted img by (left1, right1, top1, bottom 1); wherein VxLast is the speed of the last effective optical flow speed in the X direction; VyLast is the speed in the Y direction of the last effective optical flow speed; the | VxLast | is the absolute value of the speed in the X direction of the last effective optical flow speed; the absolute value of the velocity in the Y direction of the last effective optical flow velocity is | VyLast |; left1, right1, top1 and bottom1 respectively represent the coordinates of the left, right, top and bottom sides of the clipped image shiftedmimg;
cutting an image of an adjacent previous frame minimap resized imgl, and obtaining a previous frame cut image with the size of (resized sizex- | VxLast |) x (resized sizey- | vyl ast |), and recording the previous frame cut image as shifted imgl; a rectangular range of the previous frame cropping image shiftedmingllast is represented by (left2, right2, top2, bottom 2); wherein VxLast is the speed of the last effective optical flow speed in the X direction; VyLast is the speed in the Y direction of the last effective optical flow speed; the | VxLast | is the absolute value of the speed in the X direction of the last effective optical flow speed; the absolute value of the velocity in the Y direction of the last effective optical flow velocity is | VyLast |; left2, right2, top2 and bottom2 respectively represent the coordinates of the left edge, the right edge, the top edge and the bottom edge of the previous frame of the clipped image shiftedImgLast;
sending the cut image ShiftedImg and the previous cut image ShiftedImgLast into an optical flow pyramid;
an optical flow calculation module:
the optical flow calculation module is used for computing the cropped image ShiftedImg against the previous-frame cropped image ShiftedImgLast to obtain a first vector (VxRaw, VyRaw) representing the overall offset of the image;
a data post-processing module:
the data post-processing module is used for carrying out amplitude limiting and filtering processing on the first vector (VxRaw, VyRaw), carrying out motion compensation and scaling, and obtaining an optical flow velocity (VxUnbinding, VyUnbinding);
a sending module:
the transmitting module is used for transmitting the optical flow velocity (VvUnbinding) to a flight control program.
The system further comprises:
and the image evaluation and abnormity warning module is used for carrying out fusion processing on the image texture quality, the fuzzy degree, the brightness information and the like obtained by the module in combination with the Confidence coefficient, and estimating flight state and scene information, such as takeoff, landing, low-altitude flight, high-altitude flight, night flight or the like. And obtaining abnormal early warning information under the condition that the algorithm is invalid, such as insufficient image texture or overspeed flight.
A data output module: the module is used for packaging and sending the optical flow velocity (VvUnbinding, VyUnbinding), the Confidence coefficient, the image quality, the abnormal state and the like to the flight control system.
Although an effect similar to that of the invention could be approached by raising the frame rate and resolution of the camera or video source, or by raising the computing performance of the CPU or GPU, the invention targets resource-limited embedded electronic products such as unmanned aerial vehicles, which generally lack a high-frame-rate, high-resolution video source and a high-performance CPU or GPU; adding them would raise cost sharply and fail to meet market demand. Likewise, combining the camera with other sensors such as ultrasonic sensors, accelerometers and gyroscopes could achieve a similar purpose, but the invention uses only the camera as its data source, giving a simpler overall structure and lower cost.
The motion control device of the invention has shown excellent results in repeated tests and verification.
One of the tests, performed with a Kudrone-series unmanned aerial vehicle carrying the motion control device of the invention, is described in detail below.
Test conditions: camera resolution 640x480, frame rate 50 FPS, CPU Allwinner MR100 (Cortex-A7, single core, 1 GHz main frequency). The flight was outdoors at a height of about 1.5 meters.
Running the motion control method of the invention and the motion control method of the traditional algorithm simultaneously produced the two curves shown in fig. 2. The abscissa in fig. 2 is the frame number; the ordinate is one component of the orthogonally decomposed horizontal travel speed of the drone, i.e. the speed at which the drone flies horizontally forward.
In fig. 2, the dotted line, labeled OLD, is the experimental data of the traditional optical flow algorithm before improvement; the solid line, labeled NEW, is the experimental data of the motion control method according to the invention. Where the dotted line coincides with the solid line, only the solid line is visible.
Comparing the two curves shows that at low speed the outputs of the two control methods are essentially identical and the curves coincide, while at high speed the curve of the control method of the invention remains stable and smooth and correctly reflects the flight state of the aircraft, whereas the curve of the traditional algorithm shows severe spikes, jitter and distortion. Thus, when the unmanned aerial vehicle flies fast, the traditional optical flow algorithm fails, while the motion control method of the invention tracks the flight state well.
Those skilled in the art will understand that, besides the unmanned aerial vehicle of the above embodiments, the motion control method of the invention can be used in vehicles, cameras, video surveillance, robots and other applications and machine vision products that need to acquire the motion state of an object.
It should also be understood that similar technical effects can be obtained by simply changing the implementation steps of the motion control method, adjusting algorithm parameters, or adding, removing or replacing some steps; the protection scope of the invention is therefore defined by the following claims.

Claims (20)

1. A motion control method based on an optical flow algorithm is characterized by comprising the following steps:
step 1: image acquisition
Reading a frame of image from a camera, converting the frame of image into a gray level image, and recording the gray level image as ImgGrey;
step 2: image pre-processing
step 21: cropping a sub-image of size ClippedSizeX x ClippedSizeY from the center of the grayscale image, and recording the sub-image as ClippedImg; wherein ClippedSizeX is the first side length of the sub-image, and ClippedSizeY is the second side length of the sub-image;
step 22: scaling the sub-image ClippedImg down by the factor Binning into a small image of size ResizedSizeX x ResizedSizeY, recorded as ResizedImg; wherein Binning is the zoom factor, ResizedSizeX is the first side length of the small image, and ResizedSizeY is the second side length of the small image;
step 3: frame skip determination
step 31: calculating the accuracy of the most recent N optical flow runs; if the accuracy is higher than a first threshold and the previous frame was not skipped, skipping the current frame and continuing to use the previous data; otherwise not skipping the frame and continuing with the following operations, wherein N is an integer;
step 4: motion compensation
step 41: according to the last effective optical flow velocity (VxLast, VyLast), cropping the small image ResizedImg into a cropped image of size (ResizedSizeX - |VxLast|) x (ResizedSizeY - |VyLast|), recorded as ShiftedImg, and representing the rectangular range of the cropped image ShiftedImg by (left1, right1, top1, bottom1); left1, right1, top1 and bottom1 respectively represent the coordinates of the left, right, top and bottom edges of the cropped image ShiftedImg;
step 42: cropping the adjacent previous-frame small image ResizedImgLast to obtain a previous-frame cropped image of size (ResizedSizeX - |VxLast|) x (ResizedSizeY - |VyLast|), recorded as ShiftedImgLast; representing the rectangular range of the previous-frame cropped image ShiftedImgLast by (left2, right2, top2, bottom2); left2, right2, top2 and bottom2 respectively represent the coordinates of the left, right, top and bottom edges of the previous-frame cropped image ShiftedImgLast;
step 43: sending the cropped image ShiftedImg and the previous-frame cropped image ShiftedImgLast into the optical flow pyramid;
wherein VxLast is the X-direction component of the last effective optical flow velocity; VyLast is the Y-direction component of the last effective optical flow velocity; |VxLast| is the absolute value of VxLast; |VyLast| is the absolute value of VyLast;
step 5: calculating optical flow
calculating on the cropped image ShiftedImg and the previous-frame cropped image ShiftedImgLast to obtain a first vector (VxRaw, VyRaw) representing the overall offset of the image;
step 6: data post-processing
performing amplitude limiting and filtering on the first vector (VxRaw, VyRaw), then performing motion compensation and scaling to obtain the optical flow velocity (VxUnbinning, VyUnbinning);
the optical flow velocity (VxUnbinning, VyUnbinning) is transmitted to the flight control system.
2. The method of claim 1,
in step 3, N is 3-10, and the first threshold is 10-30%.
3. The method of claim 1,
in step 3, N is 5-7, and the first threshold is 15-20%.
4. The method of claim 1,
in step 41, the cropping method for obtaining the cropped image ShiftedImg from the small image ResizedImg comprises the following steps:
if the velocity VxLast in the X direction of the last effective optical flow velocity is greater than 0, letting the left coordinate left1 of the cropped image be 0 and the right coordinate right1 of the cropped image be ResizedSizeX - |VxLast|; otherwise, letting left1 be |VxLast| and right1 be ResizedSizeX;
if the velocity VyLast in the Y direction of the last effective optical flow velocity is greater than 0, letting the top coordinate top1 of the cropped image be 0 and the bottom coordinate bottom1 of the cropped image be ResizedSizeY - |VyLast|; otherwise, letting top1 be |VyLast| and bottom1 be ResizedSizeY.
5. The method of claim 1,
in step 42, the method for cropping the previous-frame thumbnail ResizedImgLast to obtain the previous-frame cropped image ShiftedImgLast comprises:
if the X-direction component VxLast of the last effective optical flow velocity is greater than 0, set the left coordinate left2 of the previous-frame cropped image to VxLast and the right coordinate right2 to ResizedSizeX; otherwise, set left2 to 0 and right2 to ResizedSizeX - |VxLast|;
if the Y-direction component VyLast of the last effective optical flow velocity is greater than 0, set the top coordinate top2 of the previous-frame cropped image to VyLast and the bottom coordinate bottom2 to ResizedSizeY; otherwise, set top2 to 0 and bottom2 to ResizedSizeY - |VyLast|. A combined sketch of the two crop windows is given below.
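For illustration, a minimal NumPy sketch of the crop rules of claims 4 and 5, assuming ResizedImg and ResizedImgLast are H x W grayscale arrays and that VxLast and VyLast are rounded to whole pixels; the function name and the rounding are assumptions, only the coordinate rules come from the claims.

```python
import numpy as np

def crop_pair(resized_img, resized_img_last, vx_last, vy_last):
    """Crop the current and previous thumbnails into two windows of
    identical size, shifted against the last measured motion."""
    h, w = resized_img.shape              # ResizedSizeY, ResizedSizeX
    ax, ay = int(abs(vx_last)), int(abs(vy_last))

    # Current frame (claim 4).
    if vx_last > 0:
        left1, right1 = 0, w - ax
    else:
        left1, right1 = ax, w
    if vy_last > 0:
        top1, bottom1 = 0, h - ay
    else:
        top1, bottom1 = ay, h

    # Previous frame (claim 5): the complementary window.
    if vx_last > 0:
        left2, right2 = ax, w
    else:
        left2, right2 = 0, w - ax
    if vy_last > 0:
        top2, bottom2 = ay, h
    else:
        top2, bottom2 = 0, h - ay

    shifted_img = resized_img[top1:bottom1, left1:right1]           # ShiftedImg
    shifted_img_last = resized_img_last[top2:bottom2, left2:right2] # ShiftedImgLast
    return shifted_img, shifted_img_last
```

Both windows have size (ResizedSizeX - |VxLast|) x (ResizedSizeY - |VyLast|), so matching image regions are roughly re-aligned before tracking, which is what keeps the algorithm valid at high speeds.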
6. The method according to any of claims 1-5, wherein step 5 comprises the sub-steps of:
Step 51: apply an improved Harris point-selection algorithm to the previous-frame cropped image ShiftedImgLast and judge whether each specified region of the cropped image is a corner point, an edge or a smooth region; at the same time, obtain texture quality, blur degree and brightness information from the density of the corner points;
Step 52: perform forward tracking to obtain a first offset feature matrix MatrixForward between the two frames, where each element of the matrix represents the motion vector (Vx, Vy) of one feature point;
Step 53: perform backward tracking to obtain a second offset feature matrix MatrixBackward between the two frames, where each element of the matrix represents the motion vector (Vx, Vy) of one feature point;
Step 54: compare the two offset matrices MatrixForward and MatrixBackward obtained in the forward and backward tracking steps and eliminate elements whose difference is too large, obtaining an effective feature matrix MatrixValid; at the same time, obtain the Confidence from the number of eliminated elements, where Confidence = 1 - (number of eliminated elements / total number of elements);
Step 55: average all elements of the effective feature matrix to obtain a first vector (VxRaw, VyRaw) representing the overall offset of the image. A sketch of steps 51-55 is given below.
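A minimal sketch of steps 51-55, assuming OpenCV: cv2.goodFeaturesToTrack stands in for the patent's improved Harris point selection, and the round-trip position error stands in for the element-wise matrix comparison of step 54 (it equals the mismatch between the forward and backward motion vectors); all parameter values are illustrative assumptions.

```python
import cv2
import numpy as np

def optical_flow_step(shifted_img_last, shifted_img, round_trip_diff_thresh=0.3):
    # Step 51: pick corner-like features on the previous cropped frame.
    pts = cv2.goodFeaturesToTrack(shifted_img_last, maxCorners=50,
                                  qualityLevel=0.01, minDistance=5)
    if pts is None:
        return None, 0.0

    # Step 52: forward tracking (previous -> current).
    fwd, st_f, _ = cv2.calcOpticalFlowPyrLK(shifted_img_last, shifted_img, pts, None)
    # Step 53: backward tracking (current -> previous).
    bwd, st_b, _ = cv2.calcOpticalFlowPyrLK(shifted_img, shifted_img_last, fwd, None)

    # Step 54: round-trip consistency check; keep points whose
    # back-projected position lands near the original feature.
    diff = np.abs(bwd - pts).reshape(-1, 2).max(axis=1)
    ok = (st_f.ravel() == 1) & (st_b.ravel() == 1) & (diff < round_trip_diff_thresh)
    confidence = float(ok.sum()) / len(pts)
    if not ok.any():
        return None, 0.0

    # Step 55: average surviving motion vectors -> (VxRaw, VyRaw).
    motion = (fwd - pts).reshape(-1, 2)[ok]
    vx_raw, vy_raw = motion.mean(axis=0)
    return (float(vx_raw), float(vy_raw)), confidence
```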
7. The method of claim 6,
in step 51, the input parameters of the Harris point-selection algorithm include the minimum distance between points and the range of the number of points to be generated, where the minimum distance between points is 2-10 pixels and the number of points ranges from 10 to 50.
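These parameters map directly onto OpenCV's corner detector, shown here as an illustrative stand-in for the patent's improved Harris algorithm; the qualityLevel value is an assumption not taken from the patent.

```python
import cv2

# shifted_img_last: previous-frame cropped grayscale image (uint8)
corners = cv2.goodFeaturesToTrack(
    shifted_img_last,
    maxCorners=50,            # number of points: 10-50 per claim 7
    qualityLevel=0.01,        # assumed corner-quality cutoff (not in the patent)
    minDistance=5,            # minimum point-to-point distance: 2-10 px per claim 7
    useHarrisDetector=True,   # Harris response rather than the min-eigenvalue test
)
```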
8. The method of claim 6,
in step 52, the forward tracking method comprises: using the L-K pyramid as a prototype, build a pyramid Pymd for the cropped image ShiftedImg and a pyramid PymdLast for the previous-frame cropped image ShiftedImgLast; taking a first feature point at the top layer of the pyramid Pymd as a reference, search the same layer of the pyramid PymdLast for the pixel most similar to the first feature point, then iterate toward the lower pyramid layers using that pixel's coordinates as the initial value, and solve the first offset feature matrix MatrixForward between the two frames, where each element of the matrix represents the motion vector (Vx, Vy) of one feature point.
9. The method of claim 6,
in step 53, the backward tracking method comprises: using the L-K pyramid as a prototype, build a pyramid PymdLast for the previous-frame cropped image ShiftedImgLast and a pyramid Pymd for the cropped image ShiftedImg; taking a second feature point at the top layer of the pyramid PymdLast as a reference, search the same layer of the pyramid Pymd for the pixel most similar to the second feature point, then iterate toward the lower pyramid layers using that pixel's coordinates as the initial value, and solve the second offset feature matrix MatrixBackward between the two frames, where each element of the matrix represents the motion vector (Vx, Vy) of one feature point. A sketch of the shared pyramid construction follows below.
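Because claims 8 and 9 reuse the same two pyramids in opposite directions, they can be built once per frame pair. A sketch assuming OpenCV, where pre-built L-K pyramids are passed to the tracker in place of raw images; the window size, pyramid depth, and the feature array `pts` (from the point-selection step) are assumptions.

```python
import cv2

win_size, max_level = (21, 21), 3   # assumed tracking window and pyramid depth

# Build one pyramid per cropped frame (Pymd and PymdLast in the patent's naming).
_, pymd = cv2.buildOpticalFlowPyramid(shifted_img, win_size, max_level)
_, pymd_last = cv2.buildOpticalFlowPyramid(shifted_img_last, win_size, max_level)

# Claim 8, forward: track feature points from the previous frame into the current one.
fwd, st_f, _ = cv2.calcOpticalFlowPyrLK(pymd_last, pymd, pts, None,
                                        winSize=win_size, maxLevel=max_level)
# Claim 9, backward: track the forward results back toward the previous frame.
bwd, st_b, _ = cv2.calcOpticalFlowPyrLK(pymd, pymd_last, fwd, None,
                                        winSize=win_size, maxLevel=max_level)
```

Reusing the pyramids halves the pyramid-construction cost per frame pair, which fits the patent's stated goal of running on low-performance embedded hardware.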
10. The method of claim 6,
in step 54, the culling method comprises: subtract the two matrices MatrixForward and MatrixBackward and take the absolute value of the result to obtain a new matrix MatrixDiff; traverse each element of MatrixDiff, and if the value of an element is greater than the threshold round_trip_diff_thresh, eliminate that point, where round_trip_diff_thresh ranges from 0 to 1, in pixels.
11. The method of claim 10, wherein the threshold round_trip_diff_thresh is 0.1-0.5.
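A vectorized sketch of the culling rule of claims 10-11, assuming matrix_forward and matrix_backward are (N, 2) arrays of per-point motion vectors; the 0.3 px threshold is one value within the claimed range, and the variable names are illustrative.

```python
import numpy as np

round_trip_diff_thresh = 0.3                             # pixels, within 0.1-0.5 per claim 11

matrix_diff = np.abs(matrix_forward - matrix_backward)   # MatrixDiff
keep = (matrix_diff <= round_trip_diff_thresh).all(axis=-1)

matrix_valid = matrix_forward[keep]                      # MatrixValid
confidence = 1.0 - (len(matrix_forward) - keep.sum()) / len(matrix_forward)
vx_raw, vy_raw = matrix_valid.mean(axis=0)               # step 55: overall offset
```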
12. The method according to any of claims 1-5, wherein step 6 comprises the sub-steps of:
Step 61: perform amplitude limiting and direction filtering on the first vector (VxRaw, VyRaw) and remove unreasonable values to obtain a second vector (VxLmt, VyLmt);
Step 62: send the second vector (VxLmt, VyLmt) through a median filter and then a mean filter to obtain a smoothed third vector (VxFlt, VyFlt);
Step 63: add the motion-compensation offset (VxLast, VyLast) to the third vector (VxFlt, VyFlt), restoring the fourth vector (VxUnshifted, VyUnshifted) as it was before motion compensation; the fourth vector serves as the motion-compensation offset (VxLast', VyLast') in step 41 of the next round of operation;
Step 64: multiply the fourth vector (VxUnshifted, VyUnshifted) by the zoom factor Binning to restore the fifth vector (VxUnbinned, VyUnbinned) before zooming, i.e. the optical flow velocity of the moving device. A sketch of this chain is given below.
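A minimal sketch of steps 61-64, assuming scalar per-frame vectors and short sliding windows for the two filters; the window lengths, the limit values and the Binning factor are illustrative assumptions, not the patent's fixed choices.

```python
import numpy as np
from collections import deque

DELTA_X_MAX = DELTA_Y_MAX = 3.0   # step 61 limits; claim 14 suggests 2-4
BINNING = 2.0                     # assumed zoom factor from preprocessing

raw_hist = deque(maxlen=5)        # history feeding the median stage
med_hist = deque(maxlen=5)        # history feeding the mean stage


def postprocess(vx_raw, vy_raw, vx_last, vy_last):
    # Step 61: amplitude limiting - reject implausible raw vectors.
    if abs(vx_raw) > DELTA_X_MAX or abs(vy_raw) > DELTA_Y_MAX:
        return None                                  # invalid data, drop this frame

    # Step 62: median filter followed by mean filter -> (VxFlt, VyFlt).
    raw_hist.append((vx_raw, vy_raw))
    med_hist.append(tuple(np.median(raw_hist, axis=0)))
    vx_flt, vy_flt = np.mean(med_hist, axis=0)

    # Step 63: undo motion compensation by adding back the crop offset.
    vx_unshifted = vx_flt + vx_last                  # (VxUnshifted, VyUnshifted)
    vy_unshifted = vy_flt + vy_last

    # Step 64: undo binning to recover the full-resolution velocity.
    return vx_unshifted * BINNING, vy_unshifted * BINNING   # (VxUnbinned, VyUnbinned)
```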
13. The method of claim 12,
in step 61, the method for amplitude-limiting the first vector (VxRaw, VyRaw) comprises:
if |VxRaw| is greater than the threshold delta_X_max, or |VyRaw| is greater than the threshold delta_Y_max, the point is removed as invalid data, where the thresholds delta_X_max and delta_Y_max range from 1 to 5.
14. The method of claim 13, wherein the thresholds delta_X_max and delta_Y_max range from 2 to 4.
15. The method of claim 6, further comprising the steps of:
Step 7: image evaluation and abnormality warning
Obtain flight state information and scene information from the image texture quality, blur degree and brightness information obtained in step 51, combined with the Confidence obtained in step 54; when the algorithm fails, obtain abnormality early-warning information;
Step 8: data output
Send the optical flow velocity (VxUnbinned, VyUnbinned), the Confidence, the flight state, the scene information and the abnormality early-warning information obtained in steps 5-7 to the flight control program.
16. A motion control system based on an optical flow algorithm, comprising:
an image acquisition module:
the image acquisition module is used for reading a frame of image from the camera and converting it into a grayscale image, denoted ImgGrey;
an image preprocessing module:
the image preprocessing module is used for cutting a sub-image of size ClippedSizeX x ClippedSizeY from the center of the grayscale image, denoted ClippedImg, where ClippedSizeX and ClippedSizeY are the first and second side lengths of the sub-image; the sub-image ClippedImg is then scaled by binning into a thumbnail of size ResizedSizeX x ResizedSizeY, denoted ResizedImg, where Binning is the zoom factor and ResizedSizeX and ResizedSizeY are the first and second side lengths of the thumbnail; a sketch of this preprocessing is given below;
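A minimal sketch of the acquisition and preprocessing path, assuming OpenCV and a BGR camera frame; the crop size and the Binning value are illustrative assumptions.

```python
import cv2

CLIPPED_SIZE_X, CLIPPED_SIZE_Y = 240, 240   # assumed sub-image side lengths
BINNING = 2                                 # assumed zoom factor


def preprocess(frame_bgr):
    img_grey = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)          # ImgGrey
    h, w = img_grey.shape

    # Center crop to ClippedSizeX x ClippedSizeY -> ClippedImg.
    x0 = (w - CLIPPED_SIZE_X) // 2
    y0 = (h - CLIPPED_SIZE_Y) // 2
    clipped_img = img_grey[y0:y0 + CLIPPED_SIZE_Y, x0:x0 + CLIPPED_SIZE_X]

    # Downscale by the Binning factor -> ResizedImg.
    resized_img = cv2.resize(
        clipped_img,
        (CLIPPED_SIZE_X // BINNING, CLIPPED_SIZE_Y // BINNING),
        interpolation=cv2.INTER_AREA,
    )
    return resized_img
```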
a frame skipping judgment module:
the frame skipping judgment module is used for calculating the accuracy of the most recent N optical flow operations; if the accuracy is higher than the first threshold and the previous frame was not skipped, the current frame is skipped and the previous data continues to be used; otherwise, the frame is not skipped and the following operations continue, where N is an integer;
a motion compensation module:
the motion compensation module is used for:
cropping the thumbnail ResizedImg, according to the last effective optical flow velocity (VxLast, VyLast), into a cropped image of size (ResizedSizeX - |VxLast|) x (ResizedSizeY - |VyLast|), denoted ShiftedImg; the rectangular extent of ShiftedImg is represented by (left1, right1, top1, bottom1), where left1, right1, top1 and bottom1 are the coordinates of its left, right, top and bottom edges; VxLast and VyLast are the X-direction and Y-direction components of the last effective optical flow velocity, and |VxLast| and |VyLast| are their absolute values;
cropping the adjacent previous-frame thumbnail ResizedImgLast to obtain a previous-frame cropped image of size (ResizedSizeX - |VxLast|) x (ResizedSizeY - |VyLast|), denoted ShiftedImgLast; the rectangular extent of ShiftedImgLast is represented by (left2, right2, top2, bottom2), where left2, right2, top2 and bottom2 are the coordinates of its left, right, top and bottom edges;
and sending the cropped image ShiftedImg and the previous-frame cropped image ShiftedImgLast into the optical flow pyramid;
an optical flow calculation module:
the optical flow calculation module is used for computing on the cropped image ShiftedImg and the previous-frame cropped image ShiftedImgLast to obtain a first vector (VxRaw, VyRaw) representing the overall offset of the image;
a data post-processing module:
the data post-processing module is used for performing amplitude limiting and filtering on the first vector (VxRaw, VyRaw), then applying motion compensation and scaling to obtain the optical flow velocity (VxUnbinned, VyUnbinned);
a sending module:
the sending module is used for transmitting the optical flow velocity (VxUnbinned, VyUnbinned) to the flight control system.
17. The system of claim 16,
the method for cropping the thumbnail ResizedImg to obtain the cropped image ShiftedImg comprises:
if the X-direction component VxLast of the last effective optical flow velocity is greater than 0, set the left coordinate left1 of the cropped image to 0 and the right coordinate right1 to ResizedSizeX - |VxLast|; otherwise, set left1 to |VxLast| and right1 to ResizedSizeX;
if the Y-direction component VyLast of the last effective optical flow velocity is greater than 0, set the top coordinate top1 of the cropped image to 0 and the bottom coordinate bottom1 to ResizedSizeY - |VyLast|; otherwise, set top1 to |VyLast| and bottom1 to ResizedSizeY.
18. The system of claim 16,
the method for cropping the previous-frame thumbnail ResizedImgLast to obtain the previous-frame cropped image ShiftedImgLast comprises:
if the X-direction component VxLast of the last effective optical flow velocity is greater than 0, set the left coordinate left2 of the previous-frame cropped image to VxLast and the right coordinate right2 to ResizedSizeX; otherwise, set left2 to 0 and right2 to ResizedSizeX - |VxLast|;
if the Y-direction component VyLast of the last effective optical flow velocity is greater than 0, set the top coordinate top2 of the previous-frame cropped image to VyLast and the bottom coordinate bottom2 to ResizedSizeY; otherwise, set top2 to 0 and bottom2 to ResizedSizeY - |VyLast|.
19. The system of claim 16, wherein the optical flow calculation module is configured to:
apply an improved Harris point-selection algorithm to the previous-frame cropped image ShiftedImgLast and judge whether each specified region of the cropped image is a corner point, an edge or a smooth region; at the same time, obtain texture quality, blur degree and brightness information from the density of the corner points;
perform forward tracking to obtain a first offset feature matrix MatrixForward between the two frames, where each element of the matrix represents the motion vector (Vx, Vy) of one feature point;
perform backward tracking to obtain a second offset feature matrix MatrixBackward between the two frames, where each element of the matrix represents the motion vector (Vx, Vy) of one feature point;
compare the two offset matrices MatrixForward and MatrixBackward obtained in the forward and backward tracking steps and eliminate elements whose difference is too large, obtaining an effective feature matrix MatrixValid; at the same time, obtain the Confidence from the number of eliminated elements, where Confidence = 1 - (number of eliminated elements / total number of elements);
and average all elements of the effective feature matrix to obtain a first vector (VxRaw, VyRaw) representing the overall offset of the image.
20. The system of claim 16, wherein the data post-processing module is configured to:
perform amplitude limiting and direction filtering on the first vector (VxRaw, VyRaw) and remove unreasonable values to obtain a second vector (VxLmt, VyLmt);
send the second vector (VxLmt, VyLmt) through a median filter and then a mean filter to obtain a smoothed third vector (VxFlt, VyFlt);
add the motion-compensation offset (VxLast, VyLast) to the third vector (VxFlt, VyFlt), restoring the fourth vector (VxUnshifted, VyUnshifted) as it was before motion compensation;
and multiply the fourth vector (VxUnshifted, VyUnshifted) by the zoom factor Binning to restore the fifth vector (VxUnbinned, VyUnbinned) before zooming, i.e. the optical flow velocity of the moving device.
CN201910519529.7A 2019-06-17 2019-06-17 Motion control method and system based on optical flow algorithm Active CN110660086B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910519529.7A CN110660086B (en) 2019-06-17 2019-06-17 Motion control method and system based on optical flow algorithm

Publications (2)

Publication Number Publication Date
CN110660086A CN110660086A (en) 2020-01-07
CN110660086B true CN110660086B (en) 2022-01-04

Family

ID=69028630

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910519529.7A Active CN110660086B (en) 2019-06-17 2019-06-17 Motion control method and system based on optical flow algorithm

Country Status (1)

Country Link
CN (1) CN110660086B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112991386A (en) * 2021-02-20 2021-06-18 浙江欣奕华智能科技有限公司 Optical flow tracking device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106200672A * 2016-07-19 2016-12-07 深圳北航新兴产业技术研究院 Unmanned aerial vehicle obstacle avoidance method based on optical flow
CN106989744A * 2017-02-24 2017-07-28 中山大学 Autonomous positioning method for a rotor unmanned aerial vehicle fusing onboard multi-sensor data
CN107222662A * 2017-07-12 2017-09-29 中国科学院上海技术物理研究所 Electronic image stabilization method based on improved KLT and Kalman filtering
CN108204812A * 2016-12-16 2018-06-26 中国航天科工飞航技术研究院 Unmanned aerial vehicle speed estimation method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Optical flow estimation for motion-compensated compression; Wei Chen et al.; Image and Vision Computing; 2013-03-31; pp. 275-289 *
An HS optical flow detection algorithm based on global motion compensation; Ye Chunming; Optics & Optoelectronic Technology; 2015-10-31; pp. 87-92 *

Similar Documents

Publication Publication Date Title
CN109635685B (en) Target object 3D detection method, device, medium and equipment
US10339386B2 (en) Unusual event detection in wide-angle video (based on moving object trajectories)
CN113286194A (en) Video processing method and device, electronic equipment and readable storage medium
TWI420906B (en) Tracking system and method for regions of interest and computer program product thereof
CN111209825B (en) Method and device for dynamic target 3D detection
EP2713310A2 (en) System and method for detection and tracking of moving objects
CN103186887B (en) Image demister and image haze removal method
EP3070430B1 (en) Moving body position estimation device and moving body position estimation method
CN110458877B (en) Navigation method based on bionic vision for fusing infrared and visible light information
CN110570457B (en) Three-dimensional object detection and tracking method based on stream data
US11170524B1 (en) Inpainting image feeds of operating vehicles
CN111829484B (en) Target distance measuring and calculating method based on vision
KR20210043628A (en) Obstacle detection method, intelligent driving control method, device, medium, and device
KR20190030474A (en) Method and apparatus of calculating depth map based on reliability
Braut et al. Estimating OD matrices at intersections in airborne video-a pilot study
CN113744315B (en) Semi-direct vision odometer based on binocular vision
JP2021528732A (en) Moving object detection and smart driving control methods, devices, media, and equipment
CN110599522A (en) Method for detecting and removing dynamic target in video sequence
CN108844538A (en) Unmanned aerial vehicle obstacle avoidance waypoint generation method based on vision/inertial navigation
CN110660086B (en) Motion control method and system based on optical flow algorithm
CN116719339A (en) Unmanned aerial vehicle-based power line inspection control method and system
Zafarifar et al. Horizon detection based on sky-color and edge features
CN111461008B (en) Unmanned aerial vehicle aerial photographing target detection method combined with scene perspective information
Liu et al. A joint optical flow and principal component analysis approach for motion detection
CN112102412B (en) Method and system for detecting visual anchor point in unmanned aerial vehicle landing process

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant