CN116862944A - Real-time electronic image stabilization method and system for ornithopter flying robot

Info

Publication number
CN116862944A
Authority
CN
China
Prior art keywords
image, optical flow, grid, flow information, real
Legal status: Granted
Application number
CN202310768372.8A
Other languages
Chinese (zh)
Other versions
CN116862944B (en)
Inventor
付强
贺威
常蓉峰
刘胜南
吴晓阳
王久斌
何修宇
黄海丰
邹尧
刘志杰
黄鸣阳
李擎
张辉
张春华
Current Assignee
University of Science and Technology Beijing USTB
Original Assignee
University of Science and Technology Beijing USTB
Application filed by University of Science and Technology Beijing USTB
Priority to CN202310768372.8A
Publication of CN116862944A
Application granted
Publication of CN116862944B
Status: Active


Classifications

    • G06T 7/20 Image analysis; Analysis of motion
    • G06N 3/0464 Neural networks; Architecture, e.g. interconnection topology; Convolutional networks [CNN, ConvNet]
    • G06N 3/08 Neural networks; Learning methods
    • G06T 7/70 Image analysis; Determining position or orientation of objects or cameras
    • G06V 10/40 Image or video recognition or understanding; Extraction of image or video features

Abstract

The invention discloses a real-time electronic image stabilization method and system for a flapping-wing (ornithopter) flying robot. The method comprises the following steps: acquiring a video stream captured in real time while the flapping-wing flying robot performs aerial photography in flight; dividing each frame of the video stream into a grid and sampling pixel points at equal intervals in each frame to obtain feature points; determining the optical flow information of each feature point in each frame based on a preset optical flow estimation network; filtering the optical flow information of the feature points near each grid vertex to obtain the optical flow information of each grid vertex, and accumulating it to obtain an original motion path; smoothing the original motion path in combination with the flapping-wing motion period to generate a smooth path; and performing reverse position compensation on the image sequence according to the generated smooth path to produce a stabilized video stream. The method has a small computational load and real-time processing capability, and can better meet the flapping-wing flying robot's requirement for stable real-time aerial imagery.

Description

Real-time electronic image stabilization method and system for ornithopter flying robot
Technical Field
The invention relates to the technical field of aerial-video stabilization for flapping-wing flying robots, and in particular to a real-time electronic image stabilization method and system for flapping-wing flying robots.
Background
The flapping-wing flying robot is a bionic aircraft that has emerged in recent years. Its unique stealth, camouflage and high maneuverability give it advantages over fixed-wing and rotary-wing aircraft in applications such as reconnaissance and covert tracking. However, its attitude changes constantly in flight and its flapping pattern differs across motion modes, which complicates cruise tasks based on flapping-wing aircraft and challenges the design of its control and visual-perception systems.
Unlike fixed-wing and rotary-wing aircraft, flapping-wing flying robots rely on flapping wings to generate propulsion, which causes large heave of the fuselage during flight. This severely degrades visual imaging: when the flapping frequency is high, the video captured by the onboard camera shakes so intensely that it is difficult to watch, and downstream visual tasks become difficult. Aerial video captured during flapping-wing motion therefore needs image stabilization, and subsequent tasks such as reconnaissance and detection require real-time transmission of the footage, which places demands on the complexity of the stabilization algorithm.
Mainstream electronic image stabilization algorithms comprise three main steps: motion estimation, motion filtering and motion compensation of the video stream. Many different algorithmic ideas derive from this, such as full-frame 2D methods that treat the image as a plane, 3D algorithms that take scene depth into account, and 2.5D stabilization algorithms explored by researchers. The grid method is a 2D algorithm that divides the image into several rectangular regions and treats the scene inside each small rectangle as a plane for stable mapping. It retains the speed of a 2D algorithm while, through the partitioning, it alleviates the degradation of stabilization quality caused by large depth-of-field variation, and can achieve a good stabilization result while meeting real-time requirements.
For motion estimation, there are classical optical flow algorithms and dense optical flow estimation methods based on deep neural networks, for example the Lucas-Kanade (LK) sparse optical flow algorithm and the FlowNet family of optical flow estimation networks. Classical sparse optical flow is fast, whereas neural-network-based optical flow estimation, developed in recent years, is more accurate and computes dense optical flow, i.e. a velocity vector for every pixel in the image. However, the relatively long inference time of the better-performing optical flow estimation networks limits their application to some extent.
Disclosure of Invention
The invention provides a real-time electronic image stabilization method and system for flapping-wing flying robots, aiming to solve the following technical problems: existing dense optical flow estimation networks are computationally heavy and hard to apply to the electronic image stabilization task of a flapping-wing flying robot, which requires real-time operation; and the optical flow point positions computed by traditional sparse optical flow are uncertain and of low accuracy, so they cannot be applied stably in a grid algorithm.
In order to solve the technical problems, the invention provides the following technical scheme:
In one aspect, the invention provides a real-time electronic image stabilization method for flapping-wing flying robots, comprising the following steps:
acquiring a video stream captured in real time while the flapping-wing flying robot performs aerial photography in flight;
dividing each frame of image in the video stream into a grid, sampling the pixel points of each frame at equal intervals, and taking the sampled pixel points as feature points of the corresponding image, wherein the grid positions and feature point positions are identical from frame to frame; within any single frame, the feature points are uniformly distributed over the image, and the number of grid vertices is smaller than the number of feature points;
taking the first frame image of the video stream as reference and starting from the second frame image, sequentially determining the optical flow information of each feature point in each frame image based on a preset optical flow estimation network;
filtering the optical flow information of the feature points near each grid vertex to obtain the optical flow information of each grid vertex; within one flapping-wing motion period, accumulating the optical flow information of each grid vertex along the time axis to obtain the original motion path of the corresponding grid vertex over that period; and smoothing the original motion path in combination with the flapping-wing motion period to generate a smooth path;
and performing reverse position compensation on the image sequence according to the generated smooth path to generate a stabilized video stream.
Further, the optical flow estimation network is a convolutional neural network;
The input of the optical flow estimation network is two consecutive frames of images received in real time, and the output is the optical flow information of the feature points in the current image relative to the previous frame image.
Further, the filtering of the optical flow information of the feature points near each grid vertex to obtain the optical flow information of each grid vertex comprises:
for an image whose grid-vertex optical flow information is to be calculated, partitioning all feature points in the image into blocks according to the number of grid vertices, each partitioned region covering one grid vertex and containing several feature points;
and applying median filtering to the optical flow information of all feature points in the partition where each grid vertex is located, and taking the result of the median filtering as the optical flow information of the corresponding grid vertex.
Further, the formula for smoothing the original motion path in combination with the flapping-wing motion period is as follows:

A·F=V;

where A is a two-dimensional matrix with $2M^2$ rows and $T_A$ columns, $M^2$ denotes the total number of grid vertices after the image is divided into grids, and $T_A$ denotes the flapping-wing motion period of the flapping-wing flying robot, measured in frames; F denotes the filter, a $T_A \times 1$ column vector used to smooth each row of data of A; V is a $2M^2 \times 1$ column vector that records the smoothed position of the original motion path of each grid vertex after smoothing;

A is expressed by the following formula:

$$A=\begin{bmatrix}X_{11} & X_{12} & \cdots & X_{1T_A}\\ Y_{11} & Y_{12} & \cdots & Y_{1T_A}\\ \vdots & \vdots & & \vdots\\ X_{M^2 1} & X_{M^2 2} & \cdots & X_{M^2 T_A}\\ Y_{M^2 1} & Y_{M^2 2} & \cdots & Y_{M^2 T_A}\end{bmatrix}$$

where $X_{ji}$, $Y_{ji}$ denote the x-axis and y-axis components of the optical flow information of the j-th grid vertex in the (i+1)-th frame image, $j=1,2,\ldots,M^2$, $i=1,2,\ldots,T_A$;

F is expressed by the following formula:

$$F=\frac{1}{T_A}\begin{bmatrix}1-p\gamma & \cdots & 1-2\gamma & 1-\gamma & 1+f(\gamma) & 1-\gamma & 1-2\gamma & \cdots & 1-q\gamma\end{bmatrix}^{\mathsf T}$$

where γ denotes a weight-adjustment parameter; f(γ) is a compensation term representing the weight of the current-moment data during path filtering; and p, q, f(γ) satisfy:

$$(1-p\gamma)+\cdots+(1-2\gamma)+(1-\gamma)+(1+f(\gamma))+(1-\gamma)+\cdots+(1-q\gamma)=T_A$$
On the other hand, the invention also provides a real-time electronic image stabilizing system for the ornithopter robot, which comprises:
the image preprocessing module, used for acquiring the video stream captured in real time while the flapping-wing flying robot performs aerial photography in flight, dividing each frame of image in the video stream into a grid, sampling the pixel points of each frame at equal intervals, and taking the sampled pixel points as feature points of the corresponding image, wherein the grid positions and feature point positions are identical from frame to frame; within any single frame, the feature points are uniformly distributed over the image, and the number of grid vertices is smaller than the number of feature points;
the characteristic point optical flow information estimation module is used for sequentially determining optical flow information of each characteristic point in each frame image based on a preset optical flow estimation network from a second frame image by taking a first frame image in the video stream as a reference;
The grid vertex motion estimation module is used for obtaining optical flow information of each grid vertex by filtering the optical flow information of the feature points near each grid vertex generated by the feature point optical flow information estimation module; in one flapping wing movement period, accumulating the optical flow information of each grid vertex in a time axis to obtain an original movement path of the corresponding grid vertex in the flapping wing movement period; carrying out smooth filtering on the original motion path by combining the flapping wing motion period to generate a smooth path;
and the motion compensation module is used for carrying out reverse position compensation on the image sequence according to the smooth path generated by the grid vertex motion estimation module, and generating a stabilized video stream.
Further, the optical flow estimation network is a convolutional neural network;
The input of the optical flow estimation network is two consecutive frames of images received in real time, and the output is the optical flow information of the feature points in the current image relative to the previous frame image.
Further, the filtering of the optical flow information of the feature points near each grid vertex to obtain the optical flow information of each grid vertex comprises:
for an image whose grid-vertex optical flow information is to be calculated, partitioning all feature points in the image into blocks according to the number of grid vertices, each partitioned region covering one grid vertex and containing several feature points;
and applying median filtering to the optical flow information of all feature points in the partition where each grid vertex is located, and taking the result of the median filtering as the optical flow information of the corresponding grid vertex.
Further, the formula for smoothing the original motion path in combination with the flapping-wing motion period is as follows:

A·F=V;

where A is a two-dimensional matrix with $2M^2$ rows and $T_A$ columns, $M^2$ denotes the total number of grid vertices after the image is divided into grids, and $T_A$ denotes the flapping-wing motion period of the flapping-wing flying robot, measured in frames; F denotes the filter, a $T_A \times 1$ column vector used to smooth each row of data of A; V is a $2M^2 \times 1$ column vector that records the smoothed position of the original motion path of each grid vertex after smoothing;

A is expressed by the following formula:

$$A=\begin{bmatrix}X_{11} & X_{12} & \cdots & X_{1T_A}\\ Y_{11} & Y_{12} & \cdots & Y_{1T_A}\\ \vdots & \vdots & & \vdots\\ X_{M^2 1} & X_{M^2 2} & \cdots & X_{M^2 T_A}\\ Y_{M^2 1} & Y_{M^2 2} & \cdots & Y_{M^2 T_A}\end{bmatrix}$$

where $X_{ji}$, $Y_{ji}$ denote the x-axis and y-axis components of the optical flow information of the j-th grid vertex in the (i+1)-th frame image, $j=1,2,\ldots,M^2$, $i=1,2,\ldots,T_A$;

F is expressed by the following formula:

$$F=\frac{1}{T_A}\begin{bmatrix}1-p\gamma & \cdots & 1-2\gamma & 1-\gamma & 1+f(\gamma) & 1-\gamma & 1-2\gamma & \cdots & 1-q\gamma\end{bmatrix}^{\mathsf T}$$

where γ denotes a weight-adjustment parameter; f(γ) is a compensation term representing the weight of the current-moment data during path filtering; and p, q, f(γ) satisfy:

$$(1-p\gamma)+\cdots+(1-2\gamma)+(1-\gamma)+(1+f(\gamma))+(1-\gamma)+\cdots+(1-q\gamma)=T_A$$
The technical scheme provided by the invention has the beneficial effects that at least:
the invention provides a real-time electronic image stabilizing scheme for grid motion estimation based on estimating the real-time optical flow of the specific position of the image for the first time, and provides a scheme for performing motion filtering aiming at the specific flapping wing motion frequency of the flapping wing flying robot for the first time. The method comprises the steps of estimating the real-time optical flow of uniformly distributed characteristic points on an image through a deep neural network, integrating and filtering the speeds of the grid vertexes close to the optical flow points to obtain the speeds of all the vertexes of the grid, so that the problem that the dense optical flow estimation speed is too slow in a real-time application scene is solved, meanwhile, the number of the optical flow points is ensured to be enough, and the randomness of the positions of the characteristic points of the traditional sparse optical flow is avoided. In addition, the flapping frequency of the flapping-wing robot is utilized to design a motion filtering algorithm, and the filtering processing is carried out on the aerial video flow path according to the flapping rule of the flapping-wing flying robot, so that a better smooth path is obtained, and the image sequence is subjected to reverse position compensation, namely image mapping, according to the found smooth path, so that a stable video flow is generated. The problem that the characteristic tracking is difficult to distinguish from the intended movement in the video due to the fact that the flapping wing movement brings a large-amplitude long-time image shaking is avoided in a short time. Based on the method, the problem that the aerial video of the ornithopter robot is difficult to watch due to the fact that the aerial video of the ornithopter robot has obvious periodic shake is solved, the requirements on the accuracy and the instantaneity of electronic image stabilization of the aerial video of the ornithopter robot can be met, the algorithm of each part is low in complexity, small in calculated amount and simple in structure, the method has the capability of real-time processing, and the method is suitable for real-time use when the ornithopter robot executes a cruise task.
Drawings
In order to describe the technical solutions of the embodiments of the present invention more clearly, the drawings required by the description of the embodiments are briefly introduced below. The drawings described below are obviously only some embodiments of the present invention, and a person skilled in the art may obtain other drawings from them without inventive effort.
FIG. 1 is a schematic diagram of the execution flow of the real-time electronic image stabilization method for flapping-wing flying robots provided by an embodiment of the invention;
FIG. 2 is a schematic diagram of the image processing principle provided by an embodiment of the present invention;
FIG. 3 is a schematic diagram of grid vertex velocity calculation according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of the two-dimensional velocity filtering operation for grid vertices provided by an embodiment of the present invention;
FIG. 5 is a schematic diagram of the complete electronic image stabilization process according to an embodiment of the present invention;
FIG. 6 is a block diagram of the real-time electronic image stabilization system for flapping-wing flying robots provided by an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, the embodiments of the present invention will be described in further detail with reference to the accompanying drawings.
First embodiment
To meet the flapping-wing flying robot's requirement for stable real-time aerial imagery, this embodiment provides a real-time electronic image stabilization method for flapping-wing flying robots. It combines an image de-shaking algorithm with the flight-motion characteristics of the flapping-wing flying robot to form a complete real-time electronic image stabilization algorithm that addresses the problem that shaky aerial footage captured during flight tasks is difficult to view. The algorithm applies key techniques such as motion detection and motion filtering to the aerial footage in flight to perform real-time stabilizing transformations of the images, providing the flapping-wing flying robot with efficient and reliable real-time image stabilization. In this embodiment, on the basis of the traditional grid algorithm, a convolutional neural network computes the pixel velocities at fixed positions on the image, and the real-time velocities of the grid vertices are then computed from the optical flow points uniformly distributed over the image. On the other hand, the flapping-wing motion period of the robot in flight is fused into the stabilization filtering algorithm, and a targeted path filtering algorithm is designed for the flight characteristics of the flapping-wing flying robot. This solves both the problem that traditional dense optical flow networks are too computationally heavy to deploy on platforms with strict real-time requirements such as flapping-wing flying robots, and the problem that image feature tracking struggles to remove the long-duration, large-amplitude periodic shake in flapping-wing aerial video. The aerial video stream captured by the aircraft is processed in real time to obtain a video stream with the shake noise removed. The method has a simple structure, a small computational load, high real-time performance, accurate calculation and a good filtering effect, and can well meet the real-time electronic image stabilization requirements of flapping-wing flying robots.
The method can be realized by the electronic equipment, and the execution flow of the method is shown in fig. 1, and comprises the following steps:
s1, acquiring a video stream acquired in real time when the flapping wing flying robot performs flying aerial photography;
the video stream is collected by a visual perception module of the ornithopter robot.
S2, dividing each frame of image in the video stream into a grid, sampling the pixel points of each frame at equal intervals, and taking the sampled pixel points as feature points of the corresponding image, wherein the grid positions and feature point positions are identical from frame to frame; within any single frame, the feature points are uniformly distributed over the image, and the number of grid vertices is smaller than the number of feature points;
It should be noted that the algorithm of this embodiment is based on the traditional 2D grid algorithm and divides the complete image into N×N grid regions (N = 4 in this embodiment). Each small region is stably mapped separately to obtain the final complete stabilized image. According to the principle of spatial consistency, i.e. spatially adjacent pixel points follow similar motion patterns, a sub-image that is divided small enough can be approximately treated as a plane, which to some extent avoids the adverse effect of depth-of-field differences on image stabilization.
On the other hand, the algorithm determines equally spaced positions on the complete image and samples the pixel points there; the pixel points at these positions are the feature points tracked by the optical flow estimation network.
S3, taking the first frame image of the video stream as reference and starting from the second frame image, sequentially determining the optical flow information of each feature point in each frame image based on a preset optical flow estimation network;
After the image is divided, the optical flow estimation network takes the first input image as reference and sequentially computes the motion velocity of the pixel points at the corresponding positions of the subsequent frames, i.e. the optical flow information. This is a two-dimensional vector with x and y degrees of freedom: its magnitude represents the motion distance of the pixel point and its direction represents the motion direction. A sparse optical flow field uniformly distributed over the current image is thus obtained. When the uniformly distributed optical flow points are computed, their number is larger than the number of grid vertices; the optical flow information of the grid vertices is then computed from the sparse optical flow field on the image rather than predicted directly. As a result, the optical flow of adjacent grid vertices is ultimately solved from spatially consistent regional optical flow and carries a natural mutual constraint, unlike schemes that add constraint terms to the loss function during network training.
Specifically, in this embodiment the optical flow estimation network is a convolutional neural network built from convolution and fully connected operations. Its input is two consecutive frames of images transmitted in real time, and its output is the real-time optical flow information of the pixel points at specific positions of the image, which is sparse but guaranteed to be uniformly distributed over the image. Unlike classical dense optical flow estimation networks, this network does not follow the large-to-small, small-to-large encoder-decoder structure of other dense optical flow networks, because the instantaneous velocity of every pixel does not need to be estimated; only the velocities of the pixels relevant to the grid vertices matter.
By sampling the original image at equal intervals, the network fixes the positions of the optical flow points and computes instantaneous velocities only at those discrete positions, avoiding the excessive computation and long processing time of computing instantaneous velocities for all pixels of the image. It also guarantees the stability of the positions of the obtained optical flow points: a traditional sparse optical flow algorithm matches only a small set of discrete feature points across the image sequence to obtain the optical flow, the feature points vary with the image scene and image features, and a sufficient number of optical flow vectors is hard to guarantee in regions with weak image features.
The specific image processing principle is shown in fig. 2. During optical flow estimation of a pixel point, the position of a given pixel of the current image is first searched for in the previous frame image; after the corresponding pixel pair is matched, the vector from the pixel's position in the previous frame to its position in the current frame is the pixel's current optical flow, whose direction is the pixel's motion direction and whose magnitude represents its motion distance.
Further, the optical flow estimation network is a standard ResNet-50 network, mainly comprising convolution, activation and fully connected modules; the number of input channels is 6, and the final fully connected layer predicts 20×20×2 values, i.e. 400 optical flow points carrying 800 motion values. After the algorithm divides the image into N×N grid cells, the network computes the velocity vectors, relative to the previous frame, of the 20×20 pixel points at the special positions on the full image.
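As a concrete illustration of such a network, the following is a minimal PyTorch sketch, not the patented implementation: the class name is hypothetical, and the use of torchvision's ResNet-50 with a swapped 6-channel stem and an 800-way regression head is an assumption consistent with the description above.

    import torch
    import torch.nn as nn
    from torchvision.models import resnet50

    class SparseFlowNet(nn.Module):
        # Predicts optical flow at 20x20 fixed, equally spaced image positions.
        def __init__(self, grid_h=20, grid_w=20):
            super().__init__()
            backbone = resnet50(weights=None)
            # Two RGB frames stacked along the channel axis -> 6 input channels.
            backbone.conv1 = nn.Conv2d(6, 64, kernel_size=7, stride=2,
                                       padding=3, bias=False)
            # Replace the classifier with a regressor for 20*20*2 = 800 values.
            backbone.fc = nn.Linear(backbone.fc.in_features, grid_h * grid_w * 2)
            self.backbone = backbone
            self.grid_h, self.grid_w = grid_h, grid_w

        def forward(self, prev_frame, curr_frame):
            x = torch.cat([prev_frame, curr_frame], dim=1)      # (B, 6, H, W)
            flow = self.backbone(x)                             # (B, 800)
            return flow.view(-1, self.grid_h, self.grid_w, 2)   # (dx, dy) per point

    # Example: one 640x480 frame pair -> 400 flow vectors.
    net = SparseFlowNet()
    f0 = torch.rand(1, 3, 480, 640)
    f1 = torch.rand(1, 3, 480, 640)
    print(net(f0, f1).shape)  # torch.Size([1, 20, 20, 2])

Because the head regresses a fixed 800-dimensional vector, changing the number of sampled points requires resizing this final layer, as noted below.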
The feature-point position coordinates $(x_{\text{feat}}, y_{\text{feat}})$ are determined by the following formulas:

$$h=\mathrm{Fl}\!\left(\frac{H}{\text{cell\_num\_h}}\right),\qquad w=\mathrm{Fl}\!\left(\frac{W}{\text{cell\_num\_w}}\right)$$

$$y_{\text{feat}}=S(h),\qquad x_{\text{feat}}=S(w)$$

where S(·) denotes the equidistant sampling function: S(h) gives the pixel rows at which optical flow is required, starting from 0 with spacing h, and S(w) gives the corresponding pixel columns, starting from 0 with spacing w; H and W are the height and width of the original image; cell_num_h and cell_num_w are the numbers of special points to be taken along the vertical and horizontal coordinates respectively (both 20 in this embodiment); and Fl is the floor (round-down) function, which prevents the special-point coordinates from exceeding the image size.

When cell_num_h = cell_num_w = 20, this matches the 800 outputs of the optical flow estimation network; if the cell_num_h and cell_num_w parameters are changed, the final fully connected layer of the network must be adjusted accordingly.
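To make the sampling rule concrete, here is a short NumPy sketch (the function name is hypothetical) that generates the 20×20 feature-point coordinates for a frame, following the interval-and-floor scheme above:

    import numpy as np

    def sample_feature_points(H, W, cell_num_h=20, cell_num_w=20):
        # Interval between sampled pixels; floor keeps coordinates in the image.
        h = H // cell_num_h
        w = W // cell_num_w
        ys = np.arange(cell_num_h) * h   # S(h): rows 0, h, 2h, ...
        xs = np.arange(cell_num_w) * w   # S(w): cols 0, w, 2w, ...
        grid_x, grid_y = np.meshgrid(xs, ys)
        return np.stack([grid_x, grid_y], axis=-1)  # (20, 20, 2) coordinates

    pts = sample_feature_points(480, 640)
    print(pts.shape, pts[-1, -1])  # (20, 20, 2) [608 456] -- inside 640x480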
This is a new approach in real-time electronic image stabilization. Unlike traditional motion estimation via sparse optical flow, it guarantees that the tracked optical flow points are uniformly distributed over the image, providing sufficient optical flow information for the subsequent processing of the sub-image inside each grid cell, while avoiding the heavy computation and long processing time of classical dense optical flow estimation networks such as PWC-Net and FlowNet. With a dense optical flow network the final result is the velocity of H·W pixels; at an original resolution of 640×480, the instantaneous velocities of 307200 points would have to be computed. In a real-time stabilization scenario, each pixel is not filtered and stably mapped individually, so the bulk of the data computed by dense optical flow becomes redundant in this application. The present algorithm computes the velocities of only 20×20 = 400 points, so the electronic image stabilization algorithm designed around the special-point optical flow estimation network has real-time processing capability, which allows the model to be simplified, or a simple model to be chosen, for optical flow estimation while maintaining accuracy.
In addition, unlike offline processing algorithms, estimating the motion velocities of pixels at fixed image positions with the special-point optical flow estimation network requires only a single forward inference over the image pair fed into the convolutional neural network; the optical flow of each sub-image does not have to be computed through cyclic iteration. Predicting the optical flow at the corresponding positions directly with a convolutional neural network is more direct and one-shot. At present there is no mature real-time electronic image stabilization scheme that directly predicts optical flow at equally spaced image positions with a convolutional neural network in order to obtain the grid vertex velocities.
S4, filtering the optical flow information of the feature points near each grid vertex to obtain the optical flow information of each grid vertex; in one flapping wing movement period, accumulating the optical flow information of each grid vertex in a time axis to obtain an original movement path of the corresponding grid vertex in the flapping wing movement period; carrying out smooth filtering on the original motion path by combining the flapping wing motion period to generate a smooth path;
It should be noted that the grid algorithm divides each frame of image into N×N cells, and the position transform of each cell is determined by the position transforms of its four vertices. According to the spatial-consistency principle of optical flow estimation, the optical flow of a vertex, i.e. its instantaneous velocity, can be obtained by filtering the special optical flow points near the vertex, which yields the velocities and motion paths of all grid vertices of the whole image.
Specifically, in this embodiment the optical flow information of each grid vertex is obtained by filtering the optical flow information of the feature points near that vertex as follows. For an image whose grid-vertex optical flow information is to be calculated, after the uniformly distributed optical flow points on the image have been computed, the sparse optical flow over the whole image is partitioned into blocks according to the number of grid vertices. Each partitioned region covers one grid vertex and contains several feature points, and serves as the spatial-consistency region of that vertex. The sparse optical flow within the spatial-consistency region is then filtered, and the result is taken as the relative movement of the grid vertex with respect to the previous frame, i.e. its optical flow, a two-dimensional motion vector.
As shown in fig. 3, the 400 optical flow points obtained in S2 are uniformly distributed over the whole image (the arrows in the figure indicate the real-time optical flow of the corresponding points), and the black grid divides the whole image into N×N sub-images. To transform the image stably, each sub-image is transformed stably, and transforming a sub-image requires determining the optical flow vectors at the four vertex positions of its grid region.
The two rectangular boxes in the upper-left corner of the left image in fig. 3 represent the spatial-consistency region considered when computing the velocity of the grid vertex at the center point; that is, the optical flow vectors covered by the small rectangular box serve as the basis for computing that vertex's motion velocity. Within a spatially continuous region the movement of the scene is considered continuous, without jumps, so the region is further treated as a plane. This keeps the computation efficient while still accounting for the differing motion of scene content at different depths of field.
As shown in fig. 3, the grid divides the full image into 4×4 sub-regions, each located by its four vertices, giving 25 grid vertices in total. Each vertex obtains its instantaneous velocity from its corresponding spatial-consistency region. The spatial-consistency region of a grid vertex contains several uniformly distributed optical flow points, and the velocity of each point is a motion vector with two components dx and dy: dx denotes the distance the pixel has moved along the x-axis relative to its position in the previous frame, and dy the distance along the y-axis; the magnitude denotes the movement distance and the sign the movement direction.
For each grid vertex, median filtering is applied to the optical flow in its surrounding region, expressed by the following formula:

$$V=f_M(v_1,v_2,\ldots,v_n)$$

where V denotes the grid-vertex velocity, composed of the two-dimensional components $(V_x, V_y)$, the velocity components of the current vertex in the x and y directions; $f_M$ denotes the median filter; $v_g$ ($g=1,2,\ldots,n$) denotes the velocity of the g-th optical flow point in the region associated with the vertex, likewise composed of $(v_x, v_y)$, the velocity components in the x and y directions at the corresponding image position; and n denotes the number of optical flow points in each grid vertex's spatial-consistency region.

When the sub-images are divided small enough, each sub-image scene is approximately one plane with similar motion vectors, and the optical flow given by the formula above is the optical flow of the grid vertex. Because the regions associated with adjacent grid vertices overlap, they share optical flow points, so part of the optical flow information used when computing the velocities of neighboring vertices is reused, and the velocities computed from the multiplexed optical flow information constrain one another. This differs from stabilization schemes that predict grid-vertex motion vectors directly, and no loss function constraining the deformation needs to be specially designed during network training: a spatial constraint between the velocities of neighboring grid vertices arises naturally. For real-time electronic image stabilization this has the advantages of a simple, direct algorithm and fast computation.
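The following NumPy sketch illustrates this vertex-velocity step under the assumptions of this embodiment (20×20 flow points and a 4×4 grid); the simple block partition assigning each flow point to its nearest vertex, and all names, are illustrative rather than the patented partition rule.

    import numpy as np

    def vertex_flow_from_points(flow, n_cells=4):
        # flow: (P, P, 2) optical flow at P x P equally spaced points.
        P = flow.shape[0]
        M = n_cells + 1                    # vertices per axis (5 for a 4x4 grid)
        # Assign each flow point to a vertex block along each axis.
        idx = np.minimum((np.arange(P) * M) // P, M - 1)
        vflow = np.zeros((M, M, 2))
        for a in range(M):
            for b in range(M):
                region = flow[idx == a][:, idx == b]   # spatial-consistency region
                vflow[a, b] = np.median(region.reshape(-1, 2), axis=0)
        return vflow

    vf = vertex_flow_from_points(np.random.randn(20, 20, 2))
    print(vf.shape)  # (5, 5, 2): one (dx, dy) per grid vertex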
After the corresponding velocity has been computed for each grid vertex, the optical flows of (N+1)×(N+1) vertices are obtained, as shown in the right-hand part of fig. 3. Using a convolutional neural network to first estimate optical flow at a sufficient number of specific positions, instead of directly estimating the optical flow of each grid vertex, has clear benefits: more optical flow data is available as reference, and the number of grid cells can be varied within a certain range while the optical flow network is trained only once. For example, with a convolutional neural network estimating 20×20 optical flow points on the original image, i.e. 20 points along each axis, the network need not be retrained and the grid may have any cell count smaller than 20; in the extreme case the image is divided into a 19×19 grid, whose vertices are exactly the 20×20 optical flow points.
Further, after the grid vertex velocities are obtained, each vertex velocity must be accumulated along the time axis over one filtering period, i.e. one flapping-wing motion period. This is because optical flow estimation operates on a continuous real-time image stream: each optical flow value is computed from two consecutive frames and is relative to the position in the previous frame, so the absolute movement relative to the first frame image must be obtained by accumulating the $T_A$ velocities along the time axis, yielding the original motion path of each grid vertex within one period $T_A$. Each vertex corresponds to a two-dimensional motion vector, so the accumulated absolute movement also has x and y degrees of freedom. The motion path obtained at this point is the original motion path of the pixel points on the image; it contains significant large-amplitude periodic fluctuation, which appears as periodic shake in the video. For example, dividing the image into a 4×4 grid gives 5×5 grid vertices, each with two degrees of freedom, i.e. the original paths of 2×5×5 = 50 vertex components in total, as shown in fig. 4.
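A sketch of this accumulation step, with the array layout stated in the comments assumed for illustration:

    import numpy as np

    def original_paths(vertex_flows):
        # vertex_flows: (T_A, M, M, 2) per-frame vertex optical flow over one
        # flapping period; each entry is relative to the previous frame.
        # Cumulative summation along the time axis turns the relative flows
        # into each vertex's absolute path with respect to the first frame.
        return np.cumsum(vertex_flows, axis=0)   # (T_A, M, M, 2) original paths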
In the aerial video of a flapping-wing flying robot, the flapping-wing motion pattern introduces large-amplitude, low-frequency periodic shake. Over short intervals this noise has a long duration, a constant motion direction and a large motion amplitude. Traditional stabilization algorithms based on feature-point tracking tend to consider only short time windows and mostly target Gaussian noise, so this particular motion noise, which closely resembles the intended motion of the video, is hard for them to remove. The key observation is that this oscillation frequency is closely tied to the flapping frequency of the aircraft: by letting the flapping-frequency information participate in path smoothing during electronic image stabilization, a stable path for the video stream can be obtained directly and quickly. The algorithm therefore fuses the typical flapping frequency of the flapping-wing aircraft during normal flight into a corresponding grid-vertex motion path filtering algorithm to remove the large-amplitude periodic shake. Moreover, in practice the number of grid vertices to compute, and the number of paths to filter, grow sharply as the number of grid cells increases. To avoid excessive filtering computation, this embodiment designs a new filtering formulation in which the filtering is realized by matrix multiplication, so that however the number of grid cells changes, the cost remains a single matrix multiplication, giving efficient performance.
Specifically, this embodiment filters the 50 vertex motion paths with one matrix multiplication, avoiding a loop that filters each vertex in turn, which gives high computation speed and good real-time behavior. The specific implementation is as follows:
The x-axis and y-axis motion vectors of every grid vertex are arranged in time order:

$$V_{jx}=[x_1, x_2, \ldots, x_{T_A}],\qquad V_{jy}=[y_1, y_2, \ldots, y_{T_A}]$$

where $x_i$ and $y_i$ are the j-th vertex's displacements in the x-axis and y-axis directions at the i-th moment, $i=1,2,\ldots,T_A$, and $T_A$ is the flapping-wing motion period of the flapping-wing flying robot.

The absolute movement of each vertex relative to the first frame image is then stacked into a matrix:

$$A=\begin{bmatrix}V_{1x}\\ V_{1y}\\ V_{2x}\\ V_{2y}\\ \vdots\\ V_{M^2x}\\ V_{M^2y}\end{bmatrix}$$

where A is a two-dimensional matrix with $2M^2$ rows and $T_A$ columns; M = N + 1, so $M^2$ is the total number of grid vertices when the image is divided into N×N cells, and the factor 2 reflects that each vertex has velocity components in both the x and y directions. $T_A$ is the number of data recorded per row; experiments show that the smooth curve obtained after filtering is of the best quality when $T_A$ equals the large-amplitude flapping period of the video, i.e. the flapping-wing motion period of the robot. $V_{jx}$ and $V_{jy}$ are the time-ordered x-axis and y-axis motion sequences of the j-th vertex, $j=1,2,\ldots,M^2$. Matrix A thus contains $T_A$ columns of data, i.e. the number of frames captured during one flapping-wing motion period.
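Under the same assumed array layout as the sketches above, stacking the per-vertex x and y rows into A can be written as:

    import numpy as np

    def build_A(paths):
        # paths: (T_A, M, M, 2) accumulated vertex paths (see original_paths).
        T_A = paths.shape[0]
        # (T_A, M*M, 2) -> (M*M, 2, T_A) -> (2*M*M, T_A):
        # rows come out interleaved as V_1x, V_1y, V_2x, V_2y, ...
        return paths.reshape(T_A, -1, 2).transpose(1, 2, 0).reshape(-1, T_A)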
Each row of data of matrix A is filtered; the filter is expressed by the following formula:

$$F=\frac{1}{T_A}\begin{bmatrix}1-p\gamma & \cdots & 1-2\gamma & 1-\gamma & 1+f(\gamma) & 1-\gamma & 1-2\gamma & \cdots & 1-q\gamma\end{bmatrix}^{\mathsf T}$$

where F denotes the filter, a $T_A \times 1$ column vector. When $T_A$ takes different values, p, q and the filtering time span are adapted accordingly. The weight-adjustment parameter γ expresses that, when computing the smoothed movement at a given moment, the movements at different moments carry different weights: data near the start and end of the filtering period are weighted less, data near the middle more. The value of γ need only guarantee the following two inequalities:

$$1-p\gamma>0$$

$$1-q\gamma>0$$

When γ = 0, all moments carry the same weight and the formula above becomes

$$F=\frac{1}{T_A}\begin{bmatrix}1 & 1 & \cdots & 1\end{bmatrix}^{\mathsf T},$$

i.e. a uniform moving-average filter. In addition, f(γ) is a compensation term representing the weight of the current-moment data during path filtering, used to ensure that p, q and f(γ) satisfy:

$$(1-p\gamma)+\cdots+(1-2\gamma)+(1-\gamma)+(1+f(\gamma))+(1-\gamma)+\cdots+(1-q\gamma)=T_A$$

The complete filtering formula is expressed as:

A·F=V

The V thus obtained is a $2M^2 \times 1$ column vector recording the smoothed position of each grid vertex's original motion path after filtering, so the flapping-wing motion period of the robot can be used for targeted motion filtering, yielding a stable smooth path for each grid vertex. Moreover, because the filtering algorithm designed in this embodiment fuses the period information of the flapping-wing motion during normal cruise flight, the peaks and troughs of the original path are filtered out better and the periodic noise is greatly reduced.
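A NumPy sketch of this filtering step; the roughly centered window (p ≈ q) and the 1/T_A normalization that makes the weights average to one are assumptions consistent with the constraint above, and the γ value and helper names are illustrative:

    import numpy as np

    def make_filter(T_A, gamma=0.05):
        # Build the T_A x 1 filter F: weights fall off linearly with distance
        # from the current moment; f(gamma) tops up the center weight so the
        # raw weights sum to T_A, and dividing by T_A makes them sum to 1.
        p = T_A // 2                 # samples before the current moment
        q = T_A - 1 - p              # samples after it
        left = 1.0 - gamma * np.arange(p, 0, -1)    # 1-p*gamma, ..., 1-gamma
        right = 1.0 - gamma * np.arange(1, q + 1)   # 1-gamma, ..., 1-q*gamma
        assert (1 - p * gamma) > 0 and (1 - q * gamma) > 0
        f_gamma = T_A - left.sum() - right.sum() - 1.0
        return np.concatenate([left, [1.0 + f_gamma], right]) / T_A  # (T_A,)

    def smooth_paths(A, gamma=0.05):
        # A: (2*M*M, T_A) matrix of original paths; one matrix-vector product
        # smooths every vertex path at once: V = A . F
        return A @ make_filter(A.shape[1], gamma)   # (2*M*M,)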
And S5, performing reverse position compensation on the image sequence according to the smooth path to generate a stabilized video stream.
It should be noted that the final stabilization mapping can be performed according to the filtered smooth path obtained above; the complete calculation flow is shown in fig. 5. Subtracting the original path of each grid vertex from its smooth path gives the relative offset between the vertex's current position and its target position, and each sub-image is mapped in the reverse direction according to the offsets of its four vertices, producing the stabilized complete image stream.
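A sketch of the reverse mapping: for brevity it approximates the per-cell warp by bilinearly interpolating the per-vertex offsets into a dense reverse map and resampling with OpenCV; a faithful per-cell homography warp from the four vertex offsets is equally possible, and all names are illustrative.

    import cv2
    import numpy as np

    def compensate_frame(frame, offsets):
        # offsets: (M, M, 2) = smooth_path - original_path at each grid vertex,
        # i.e. how far each vertex must move to land on the stabilized path.
        H, W = frame.shape[:2]
        # Upsample the sparse vertex offsets to a dense per-pixel offset field.
        dense = cv2.resize(offsets.astype(np.float32), (W, H),
                           interpolation=cv2.INTER_LINEAR)
        xs, ys = np.meshgrid(np.arange(W, dtype=np.float32),
                             np.arange(H, dtype=np.float32))
        # Reverse mapping: sample the source pixel that should land at (x, y).
        map_x = xs - dense[..., 0]
        map_y = ys - dense[..., 1]
        return cv2.remap(frame, map_x, map_y, interpolation=cv2.INTER_LINEAR)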
In summary, this embodiment provides a real-time electronic image stabilization method for flapping-wing flying robots. On the basis of a grid algorithm, a convolutional neural network estimates optical flow at specific image positions in real time, from which the grid vertex motion is estimated and motion filtering and motion compensation are carried out. This avoids the positional uncertainty of the optical flow points required by traditional sparse optical flow algorithms, and also avoids the long inference time of traditional dense optical flow deep networks, so a large amount of computation is saved without degrading the stabilized image quality, giving the method real-time capability. In the motion filtering stage, the flapping-wing motion period of the robot in flight is fused into the filtering algorithm, so that the shake introduced into the aerial footage by the flapping-wing motion is removed more effectively. The method offers a new scheme for real-time motion estimation within real-time electronic image stabilization; it can meet the flapping-wing flying robot's real-time stabilization needs during flight tasks, effectively reduce the shake noise of aerial footage, improve the stability of aerial video, enhance the viewing experience of an observer, and provide a stable video stream for subsequent visual tasks such as detection and reconnaissance. It also has reference value for platforms that require high-accuracy, high-performance image stabilization, such as flapping-wing flying robots.
Second embodiment
The embodiment provides a real-time electronic image stabilizing system for an ornithopter flying robot, the structure of the real-time electronic image stabilizing system for the ornithopter flying robot is shown in fig. 6, and the real-time electronic image stabilizing system comprises the following modules:
the image preprocessing module, used for acquiring the video stream captured in real time while the flapping-wing flying robot performs aerial photography in flight, dividing each frame of image in the video stream into a grid, sampling the pixel points of each frame at equal intervals, and taking the sampled pixel points as feature points of the corresponding image, wherein the grid positions and feature point positions are identical from frame to frame; within any single frame, the feature points are uniformly distributed over the image, and the number of grid vertices is smaller than the number of feature points;
the characteristic point optical flow information estimation module is used for sequentially determining optical flow information of each characteristic point in each frame image based on a preset optical flow estimation network from a second frame image by taking a first frame image in the video stream as a reference;
the grid vertex motion estimation module is used for obtaining optical flow information of each grid vertex by filtering the optical flow information of the feature points near each grid vertex generated by the feature point optical flow information estimation module; in one flapping wing movement period, accumulating the optical flow information of each grid vertex in a time axis to obtain an original movement path of the corresponding grid vertex in the flapping wing movement period; carrying out smooth filtering on the original motion path by combining the flapping wing motion period to generate a smooth path;
And the motion compensation module is used for carrying out reverse position compensation on the image sequence according to the smooth path generated by the grid vertex motion estimation module, and generating a stabilized video stream.
The real-time electronic image stabilizing system for the ornithopter robot of the embodiment corresponds to the real-time electronic image stabilizing method for the ornithopter robot of the first embodiment; the functions realized by the functional modules in the real-time electronic image stabilizing system of the ornithopter-oriented flying robot correspond to the flow steps in the real-time electronic image stabilizing method of the ornithopter-oriented flying robot of the first embodiment one by one; therefore, the description is omitted here.
Furthermore, it should be noted that the present invention can be provided as a method, an apparatus, or a computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the invention may take the form of a computer program product on one or more computer-usable storage media having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, embedded processor, or other programmable data processing terminal device to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal device, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks. These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It should also be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article or terminal device comprising the element.
Finally, it should be noted that the above describes preferred embodiments of the invention. It should be understood that although preferred embodiments have been described, once a person skilled in the art knows the basic inventive concept, several modifications and adaptations can be made without departing from the principles of the invention, and such modifications and adaptations are intended to fall within the scope of the invention. It is therefore intended that the appended claims be interpreted as covering the preferred embodiments and all alterations and modifications that fall within the scope of the embodiments of the invention.

Claims (8)

1. A real-time electronic image stabilization method for a flapping-wing flying robot, characterized by comprising the following steps:
acquiring a video stream captured in real time while the flapping-wing flying robot performs aerial photography in flight;
dividing each frame of image in the video stream into a grid, sampling the pixel points of each frame at equal intervals, and taking the sampled pixel points as feature points of the corresponding image, wherein the grid positions and feature point positions are identical from frame to frame; within any single frame, the feature points are uniformly distributed over the image, and the number of grid vertices is smaller than the number of feature points;
Taking a first frame image in the video stream as a reference, starting from a second frame image, and sequentially determining optical flow information of each feature point in each frame image based on a preset optical flow estimation network;
filtering the optical flow information of the feature points near each grid vertex to obtain the optical flow information of each grid vertex; in one flapping wing movement period, accumulating the optical flow information of each grid vertex in a time axis to obtain an original movement path of the corresponding grid vertex in the flapping wing movement period; carrying out smooth filtering on the original motion path by combining the flapping wing motion period to generate a smooth path;
and performing reverse position compensation on the image sequence according to the generated smooth path to generate a stabilized video stream.
2. The real-time electronic image stabilization method for flapping-wing flying robots of claim 1, wherein the optical flow estimation network is a convolutional neural network; the input of the optical flow estimation network is two consecutive frames of images received in real time, and the output is the optical flow information of the feature points in the current image relative to the previous frame image.
3. The method for real-time electronic image stabilization of an ornithopter-oriented flying robot of claim 1, wherein the filtering the optical flow information of the feature points near each grid vertex to obtain the optical flow information of each grid vertex comprises:
For an image of optical flow information of grid vertices to be calculated, all feature points in the image are segmented according to the number of the grid vertices, and each segmented region covers one grid vertex and comprises a plurality of feature points;
and performing median filtering on the optical flow information of all feature points within the partition of each grid vertex, and taking the median-filtered result as the optical flow information of the corresponding grid vertex.
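A minimal sketch of this per-vertex median filtering follows. Assigning each feature point to its nearest grid vertex is an assumption of the sketch; claim 3 only requires that each partition cover one vertex and several feature points.

```python
import numpy as np

def vertex_flow(points, point_flow, verts):
    """Median-filter feature-point flow into one flow vector per grid vertex."""
    # Partition: assign every feature point to its nearest grid vertex.
    d = np.linalg.norm(points[:, None, :] - verts[None, :, :], axis=2)
    owner = d.argmin(axis=1)                     # vertex index for each point
    out = np.zeros_like(verts)
    for j in range(len(verts)):
        members = point_flow[owner == j]
        if len(members):
            out[j] = np.median(members, axis=0)  # componentwise median
    return out                                   # (num_vertices, 2)
```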
4. The real-time electronic image stabilization method for an ornithopter flying robot of claim 1, wherein the formula for smoothing the original motion path in combination with the flapping wing motion period is:
A·F=V;
wherein A is a two-dimensional matrix with 2M² rows and T_A columns, M² denotes the total number of grid vertices after the image is divided into grids, and T_A denotes the flapping wing motion period of the flapping wing flying robot, measured in frames; F denotes the filter, a T_A×1 column vector used to smooth each row of data of A; V is a 2M²×1 column vector recording the smoothed position of the original motion path of each grid vertex after smooth filtering;

A is expressed as:

$$A=\begin{bmatrix} X_{11} & X_{12} & \cdots & X_{1,T_A}\\ Y_{11} & Y_{12} & \cdots & Y_{1,T_A}\\ \vdots & \vdots & \ddots & \vdots\\ X_{M^2,1} & X_{M^2,2} & \cdots & X_{M^2,T_A}\\ Y_{M^2,1} & Y_{M^2,2} & \cdots & Y_{M^2,T_A} \end{bmatrix}$$

wherein X_ji and Y_ji denote, respectively, the x-axis and y-axis components of the optical flow information of the j-th grid vertex in the (i+1)-th frame image; j = 1, 2, …, M², i = 1, 2, …, T_A;

F is expressed as:

$$F=\frac{1}{T_A}\left(\,1-p\gamma,\;\ldots,\;1-2\gamma,\;1-\gamma,\;1+f(\gamma),\;1-\gamma,\;\ldots,\;1-q\gamma\,\right)^{\mathsf T}$$

wherein γ denotes a weight adjustment parameter, f(γ) is a compensating sum term representing the weight of the current-time data in the path filtering, and p, q, f(γ) satisfy: (1−pγ) + … + (1−2γ) + (1−γ) + (1+f(γ)) + (1−γ) + … + (1−qγ) = T_A.
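The sketch below assembles the smoothing step A·F = V as reconstructed above: the raw weights decay linearly with distance from the current frame and sum to T_A, so F is normalized by 1/T_A to make each smoothed entry of V a weighted average. That normalization, the placement of the current frame as the last column, and the synthetic test path are assumptions of this sketch.

```python
import numpy as np

def smoothing_filter(t_a, center, gamma=0.01):
    """T_A x 1 filter F: weight 1 - k*gamma at offset k from `center`, 1 + f(gamma) at it."""
    p, q = center, t_a - 1 - center                # entries before / after the center
    f_gamma = gamma * (p * (p + 1) / 2 + q * (q + 1) / 2)
    w = np.array([1 + f_gamma if k == center else 1 - abs(k - center) * gamma
                  for k in range(t_a)])
    assert np.isclose(w.sum(), t_a)                # raw weights sum to T_A by construction
    return (w / t_a).reshape(-1, 1)

# A: one flapping period of accumulated vertex paths, 2*M^2 rows, T_A columns.
rng = np.random.default_rng(0)
t_a, m2 = 12, 64                                   # illustrative period and 8x8 grid
A = np.cumsum(rng.normal(size=(2 * m2, t_a)), axis=1)
F = smoothing_filter(t_a, center=t_a - 1)          # current frame as the last column
V = A @ F                                          # (2*M^2, 1) smoothed positions
```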
5. A real-time electronic image stabilization system for an ornithopter flying robot, comprising:
an image preprocessing module, configured to acquire a video stream collected in real time while the ornithopter flying robot performs flying aerial photography; to perform grid division on each frame of image in the video stream, equidistantly sample pixel points in each frame of image, and take the sampled pixel points as the feature points of the corresponding image; wherein the grid positions and feature point positions are consistent across all frames of image; within the same frame of image, the feature points are uniformly distributed over the image, and the number of grid vertices obtained by the division is smaller than the number of feature points;
the characteristic point optical flow information estimation module is used for sequentially determining optical flow information of each characteristic point in each frame image based on a preset optical flow estimation network from a second frame image by taking a first frame image in the video stream as a reference;
a grid vertex motion estimation module, configured to obtain the optical flow information of each grid vertex by filtering the optical flow information, generated by the feature point optical flow information estimation module, of the feature points near that grid vertex; to accumulate, within one flapping wing motion period, the optical flow information of each grid vertex along the time axis to obtain the original motion path of the corresponding grid vertex over that period; and to smooth the original motion path in combination with the flapping wing motion period to generate a smooth path;
and a motion compensation module, configured to perform reverse position compensation on the image sequence according to the smooth path generated by the grid vertex motion estimation module, thereby producing a stabilized video stream.
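A minimal sketch of the reverse position compensation follows: the per-vertex correction (original path position minus smoothed position) is expanded to a dense displacement field and undone by resampling the frame. Bilinear expansion via cv2.resize, the sign convention, and the 8×8 vertex lattice are assumptions of this sketch rather than details fixed by the claims.

```python
import cv2
import numpy as np

def compensate(frame, verts_path, verts_smooth, m=8):
    """Resample `frame` so each grid vertex appears at its smoothed path position."""
    h, w = frame.shape[:2]
    # Per-vertex jitter: where the vertex actually is minus where it should be.
    corr = (verts_path - verts_smooth).reshape(m, m, 2).astype(np.float32)
    dense = cv2.resize(corr, (w, h), interpolation=cv2.INTER_LINEAR)
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    # Reverse compensation: sample each stabilized pixel from its jittered location.
    map_x = xs + dense[..., 0]
    map_y = ys + dense[..., 1]
    return cv2.remap(frame, map_x, map_y, interpolation=cv2.INTER_LINEAR)
```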
6. The real-time electronic image stabilization system for an ornithopter flying robot of claim 5, wherein the optical flow estimation network is a convolutional neural network; the input of the optical flow estimation network is two consecutive frame images received in real time, and its output is the optical flow information of the feature points in the current image relative to the previous frame image.
7. The real-time electronic image stabilization system for an ornithopter flying robot of claim 5, wherein filtering the optical flow information of the feature points near each grid vertex to obtain the optical flow information of each grid vertex comprises:
for an image whose grid vertex optical flow information is to be calculated, partitioning all feature points in the image according to the number of grid vertices, such that each partition covers one grid vertex and contains a plurality of feature points;
and performing median filtering on the optical flow information of all feature points within the partition of each grid vertex, and taking the median-filtered result as the optical flow information of the corresponding grid vertex.
8. The real-time electronic image stabilization system for an ornithopter flying robot of claim 5, wherein the formula for smoothing the original motion path in combination with the flapping wing motion period is:
A·F=V;
wherein A is a two-dimensional matrix with 2M² rows and T_A columns, M² denotes the total number of grid vertices after the image is divided into grids, and T_A denotes the flapping wing motion period of the flapping wing flying robot, measured in frames; F denotes the filter, a T_A×1 column vector used to smooth each row of data of A; V is a 2M²×1 column vector recording the smoothed position of the original motion path of each grid vertex after smooth filtering;

A is expressed as:

$$A=\begin{bmatrix} X_{11} & X_{12} & \cdots & X_{1,T_A}\\ Y_{11} & Y_{12} & \cdots & Y_{1,T_A}\\ \vdots & \vdots & \ddots & \vdots\\ X_{M^2,1} & X_{M^2,2} & \cdots & X_{M^2,T_A}\\ Y_{M^2,1} & Y_{M^2,2} & \cdots & Y_{M^2,T_A} \end{bmatrix}$$

wherein X_ji and Y_ji denote, respectively, the x-axis and y-axis components of the optical flow information of the j-th grid vertex in the (i+1)-th frame image; j = 1, 2, …, M², i = 1, 2, …, T_A;

F is expressed as:

$$F=\frac{1}{T_A}\left(\,1-p\gamma,\;\ldots,\;1-2\gamma,\;1-\gamma,\;1+f(\gamma),\;1-\gamma,\;\ldots,\;1-q\gamma\,\right)^{\mathsf T}$$

wherein γ denotes a weight adjustment parameter, f(γ) is a compensating sum term representing the weight of the current-time data in the path filtering, and p, q, f(γ) satisfy: (1−pγ) + … + (1−2γ) + (1−γ) + (1+f(γ)) + (1−γ) + … + (1−qγ) = T_A.
CN202310768372.8A (priority date 2023-06-27, filing date 2023-06-27) — Real-time electronic image stabilization method and system for ornithopter flying robot — Active, granted as CN116862944B.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310768372.8A CN116862944B (en) 2023-06-27 2023-06-27 Real-time electronic image stabilization method and system for ornithopter flying robot

Publications (2)

Publication Number Publication Date
CN116862944A (application publication) 2023-10-10
CN116862944B (granted publication) 2024-04-26

Family

ID=88222651

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310768372.8A Active CN116862944B (en) 2023-06-27 2023-06-27 Real-time electronic image stabilization method and system for ornithopter flying robot

Country Status (1)

Country Link
CN (1) CN116862944B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108564554A (en) * 2018-05-09 2018-09-21 上海大学 A kind of video stabilizing method based on movement locus optimization
CN108805908A (en) * 2018-06-08 2018-11-13 浙江大学 A kind of real time video image stabilization based on the superposition of sequential grid stream
CN111614965A (en) * 2020-05-07 2020-09-01 武汉大学 Unmanned aerial vehicle video image stabilization method and system based on image grid optical flow filtering
CN114219835A (en) * 2021-12-09 2022-03-22 超级视线科技有限公司 High-performance image stabilizing method and device based on video monitoring equipment
CN114584785A (en) * 2022-02-07 2022-06-03 武汉卓目科技有限公司 Real-time image stabilizing method and device for video image

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Shuaicheng Liu et al., "MeshFlow: Minimum Latency Online Video Stabilization", ECCV 2016, 31 December 2016 (2016-12-31), pages 3-4 *

Similar Documents

Publication Publication Date Title
Huang et al. Real-time intermediate flow estimation for video frame interpolation
Shi et al. Spsequencenet: Semantic segmentation network on 4d point clouds
CN111476116A Rotor unmanned aerial vehicle system for vehicle detection and tracking, and detection and tracking method
CN108230361B (en) Method and system for enhancing target tracking by fusing unmanned aerial vehicle detector and tracker
Samanta et al. Log transform based optimal image enhancement using firefly algorithm for autonomous mini unmanned aerial vehicle: An application of aerial photography
CN105160703B Optical flow computation method using a time-domain visual sensor
Ye et al. Unsupervised learning of dense optical flow, depth and egomotion with event-based sensors
CN106981073A Ground moving object real-time tracking method and system based on unmanned aerial vehicle
CN109715498A Adaptive motion filtering in unmanned autonomous vehicles
WO2016030305A1 (en) Method and device for registering an image to a model
CN108776971A Variational optical flow determination method and system based on hierarchical nearest neighbor
CN112580537B (en) Deep reinforcement learning method for multi-unmanned aerial vehicle system to continuously cover specific area
CN113159466B (en) Short-time photovoltaic power generation prediction system and method
US10482584B1 (en) Learning method and learning device for removing jittering on video acquired through shaking camera by using a plurality of neural networks for fault tolerance and fluctuation robustness in extreme situations, and testing method and testing device using the same
Passalis et al. Deep reinforcement learning for controlling frontal person close-up shooting
WO2010077389A2 (en) Systems and methods for maintaining multiple objects within a camera field-of-view
CN103139568A Video image stabilization method based on sparsity and fidelity constraints
CN112097769A Unmanned aerial vehicle simultaneous localization and mapping navigation system and method imitating the homing pigeon brain hippocampus
CN108462868A Method for predicting the user's fixation point in 360-degree panoramic VR videos
Rabah et al. Heterogeneous parallelization for object detection and tracking in UAVs
CN102685371A Digital video image stabilization method based on multi-resolution block matching and PI (proportional-integral) control
Sharif et al. Improved video stabilization using SIFT-log polar technique for unmanned aerial vehicles
CN111260687B (en) Aerial video target tracking method based on semantic perception network and related filtering
Sikorski et al. Event-based spacecraft landing using time-to-contact
CN116862944B (en) Real-time electronic image stabilization method and system for ornithopter flying robot

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant