CN112926530A - Jelly effect prevention method and system in aerial photography scene based on artificial intelligence - Google Patents
Jelly effect prevention method and system in aerial photography scene based on artificial intelligence
- Publication number: CN112926530A
- Application number: CN202110355358.6A
- Authority: CN (China)
- Prior art keywords: image, optical flow, flow information, jelly effect, pixel point
- Prior art date: 2021-04-01
- Legal status: Withdrawn (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06V20/13—Satellite images
- B64C39/02—Aircraft not otherwise provided for characterised by special use
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06N3/045—Combinations of networks
- G06N3/08—Learning methods
- G06T7/62—Analysis of geometric attributes of area, perimeter, diameter or volume
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06T2207/20081—Training; Learning
- G06T2207/20201—Motion blur correction
- G06T2207/30168—Image quality inspection
- G06T2207/30181—Earth observation
- G06T2207/30244—Camera pose
Abstract
The invention provides a jelly effect prevention method and system in an aerial photography scene based on artificial intelligence, and relates to the field of artificial intelligence. The method comprises the following steps: a rolling shutter camera collects images; optical flow information of each image is obtained by an optical flow method, and the moving pixel points and static pixel points in the image are obtained according to the optical flow information of the image and the flight speed of the unmanned aerial vehicle; a jelly effect index is obtained from the optical flow information of the static pixel points and used as the label for training a time sequence prediction network; the flight speed of the unmanned aerial vehicle, the shutter time of the camera and the reading of the IMU are input into the time sequence prediction network to predict the jelly effect index of a future frame image; and the dynamic characteristics of the image are constructed, and an adjustment mode is determined and the relevant parameters of the unmanned aerial vehicle are adjusted in time according to the jelly effect index obtained by the time sequence prediction network and the dynamic characteristics of the current frame. The invention realizes quality prediction of the shot image, adjusts the parameters of the unmanned aerial vehicle in time to prevent the jelly effect, and ensures that high-quality pictures are acquired.
Description
Technical Field
The invention relates to the technical field of artificial intelligence and aerial photography, in particular to a jelly effect prevention method and system in an aerial photography scene based on artificial intelligence.
Background
A rolling shutter exposes the CMOS pixel rows one after another, which allows a higher frame rate to be achieved; for this reason, rolling shutter cameras are frequently used to obtain clear images when an unmanned aerial vehicle performs aerial photography.
In practice, the inventors found that the above prior art has the following disadvantages:
an unmanned aerial vehicle is a device capable of high-speed motion, and while shooting it also vibrates at different frequencies and to different degrees; to the camera, such vibration is itself high-speed motion. This motion introduces a time difference between the pixel rows of an image captured by a rolling shutter, so the image exhibits skew, distortion, wobble and similar phenomena. This is called the jelly effect, and it degrades the imaging quality of aerial photography.
Disclosure of Invention
In order to solve the technical problems, the invention aims to provide a jelly effect prevention method and system in an aerial photography scene based on artificial intelligence, and the adopted technical scheme is as follows:
in a first aspect, an embodiment of the present invention provides a method for preventing a jelly effect in an aerial photography scene based on artificial intelligence, including the following steps:
inputting the flight speed of the unmanned aerial vehicle, the shutter time of the camera and the reading of the IMU into a time sequence prediction network to predict the jelly effect index of a future frame image; the label of the time sequence prediction network is a jelly effect index obtained by analyzing the optical flow information of static pixel points in each frame of image, and the static pixel points are obtained according to the optical flow information of the image and the flight speed of the unmanned aerial vehicle;

taking the area ratio of the moving pixel points to the static pixel points in each frame of image as a dynamic characteristic, comparing the dynamic characteristic with a preset threshold value, and judging the weights of the static pixel points and the moving pixel points;

determining an adjustment mode and adjusting relevant parameters of the unmanned aerial vehicle in time according to the jelly effect index obtained by the time sequence prediction network and the dynamic characteristics of each frame of image;

the selection method of the adjustment mode comprises the following steps: when the weight of the static pixel points is high, the pendant damping is adjusted to the target damping; when the weight of the moving pixel points is high, the average motion speed of the moving pixel points is calculated, the flight speed of the unmanned aerial vehicle is adjusted according to the average motion speed, and the pendant damping is adjusted to the target damping; the target damping is obtained from a pendant damping adjustment network which takes the jelly effect index as input and the target damping as output.
Further, the step of analyzing the optical flow information of the static pixel points in each frame of image to obtain the jelly effect index includes: acquiring an optical flow information sequence of each frame of image according to the optical flow information of each row of static pixel points in each frame of image; calculating the difference of optical flow information between adjacent lines according to the optical flow information sequence to obtain an optical flow information change sequence;
and determining the jelly effect index according to the mean value and the variance of the optical flow information change sequence.
Further, the step of taking the ratio of the area of the moving pixel points to the area of the static pixel points in each frame of image as the dynamic characteristic comprises the following steps: performing connected domain analysis on the binary image of the image to obtain the area of the static pixel points; inverting the binary image of the image and performing connected domain analysis to obtain the area of the moving pixel points; and constructing the dynamic characteristic according to the areas of the moving pixel points and the static pixel points in the image.
Further, the training step of the pendant damping adjustment network comprises the following steps: the pendant damping is used as a label, IMU reading and jelly effect indexes of the unmanned aerial vehicle are used as a training set, a pendant damping adjusting network is trained, a mean square error loss function is adopted as a loss function, iteration is carried out continuously, and model parameters are updated.
In a second aspect, an embodiment of the present invention provides a system for preventing jelly effect in an aerial photography scenario based on artificial intelligence, including:
the timing sequence prediction network unit is used for inputting the flight speed of the unmanned aerial vehicle, the shutter time of the camera and the reading of the IMU into the timing sequence prediction network to predict the jelly effect index of the future frame image; the label of the time sequence prediction network is a jelly effect index obtained by analyzing the optical flow information of static pixel points in each frame of image, and the static pixel points are obtained according to the optical flow information of the image and the flight speed of the unmanned aerial vehicle;
the dynamic characteristic construction unit is used for taking the area ratio of the moving pixel point to the static pixel point in each frame of image as a dynamic characteristic, comparing the dynamic characteristic with a preset threshold value and judging the weight of the static pixel point and the moving pixel point;
the adjustment mode determining unit is used for determining an adjustment mode and adjusting relevant parameters of the unmanned aerial vehicle in time according to the jelly effect index obtained by the time sequence prediction network and the dynamic characteristics of each frame of image;
the adjustment mode operation unit is used for adjusting the pendant damping to a target damping when the weight of the static pixel point is high; when the weight of the motion pixel point is high, calculating the average motion speed of the motion pixel point, adjusting the flight speed of the unmanned aerial vehicle according to the average motion speed, and adjusting the pendant damping to the target damping; the target damping is obtained according to a pendant damping adjusting network which takes a jelly effect index as input and takes the target damping as output.
Further, the time sequence prediction network unit includes:
the optical flow information sequence acquisition module is used for acquiring an optical flow information sequence of each frame of image according to the optical flow information of each row of static pixel points in each frame of image;
the optical flow information change sequence acquisition module is used for calculating the difference of optical flow information between adjacent lines according to the optical flow information sequence to obtain an optical flow information change sequence;
and the jelly effect index determining module is used for determining a jelly effect index according to the mean value and the variance of the optical flow information change sequence.
Further, the dynamic feature construction unit includes:
the static pixel point area obtaining module is used for carrying out connected domain analysis on the binary image of the image to obtain the area of the static pixel point;
the device comprises a motion pixel point area obtaining module, a motion pixel point area obtaining module and a motion pixel point area obtaining module, wherein the motion pixel point area obtaining module is used for transposing a binary image of an image and carrying out connected domain analysis to obtain the area of a motion pixel point;
and the dynamic characteristic construction module is used for constructing dynamic characteristics according to the areas of the moving pixel points and the static pixel points in the image.
Further, the adjustment mode operation unit includes:
and the pendant damping adjustment network training module is used for training a pendant damping adjustment network by taking the pendant damping as a label and taking IMU (inertial measurement unit) reading and jelly effect indexes of the unmanned aerial vehicle as a training set, wherein the loss function adopts a mean square error loss function, and model parameters are updated through continuous iteration.
The embodiment of the invention at least has the following beneficial effects:
in the embodiment of the invention, the jelly effect is generated by considering the movement of the unmanned aerial vehicle and the photographed object, the invention distinguishes static pixel points and moving pixel points by using an optical flow method, constructs the dynamic characteristics of images to determine the parameter adjustment method of the unmanned aerial vehicle, adjusts the parameters of the unmanned aerial vehicle according to different conditions and avoids the generation of the jelly effect.
The invention also provides a method for quantifying the jelly effect: in a picture shot by a rolling shutter, the pixels of different rows have a time difference, so the optical flow information of the static pixels differs between rows; a quantization index of the jelly effect is obtained according to the degree of this difference.
The whole system realizes the quality prediction of the shot images, adjusts the parameters of the unmanned aerial vehicle in time to prevent the generation of the jelly effect and ensures that high-quality images are acquired.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions and advantages of the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a flowchart illustrating a method for preventing jelly effect according to an embodiment of the present invention;
FIG. 2 is a detailed flowchart of a method for preventing jelly effect according to an embodiment of the present invention;
FIG. 3 is a schematic diagram illustrating a general overview of a method for preventing jelly effects according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a process for obtaining a jelly effect indicator according to an embodiment of the present invention;
FIG. 5 is a schematic flow chart of acquiring dynamic features of an image according to an embodiment of the present invention;
fig. 6 is a schematic diagram of a jelly effect prevention system according to an embodiment of the present invention.
Detailed Description
To further illustrate the technical means adopted by the present invention to achieve the predetermined objects and their effects, the jelly effect prevention method and system in an aerial photography scene based on artificial intelligence according to the present invention are described in detail below with reference to the accompanying drawings and preferred embodiments, covering their specific implementations, structures, features and effects. In the following description, different references to "one embodiment" or "another embodiment" do not necessarily refer to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The following specifically describes a specific scheme of a jelly effect prevention method and system based on artificial intelligence in an aerial photography scene, which is provided by the invention, with reference to the accompanying drawings.
Referring to fig. 1, a method for preventing jelly effect in an aerial photography scene based on artificial intelligence according to an embodiment of the present invention is shown, which includes the following steps:
s001, inputting the flight speed of the unmanned aerial vehicle, the shutter time of the camera and the reading of the IMU into a time sequence prediction network to predict the jelly effect index of a future frame image; the label of the time sequence prediction network is a jelly effect index obtained by analyzing the optical flow information of the static pixel points in each frame of image, and the static pixel points are obtained according to the optical flow information of the image and the flight speed of the unmanned aerial vehicle.
The image is acquired by a rolling shutter camera deployed on the unmanned aerial vehicle. An implementer can set the visual angle of the rolling shutter camera according to the actual situation; it should be noted that once the visual angle of the camera is determined, it is not changed.
The optical flow information of the image is obtained by an optical flow method. The optical flow method is based on the assumptions of constant gray scale and small displacement between frames: the movement of each pixel point between two adjacent frames gives the motion of that pixel point on the image, and this motion is the optical flow information, recorded as (Vx, Vy). It should be noted that the optical flow obtained in this way reflects the relative motion between the scene and the unmanned aerial vehicle; therefore, in this embodiment, the optical flow information of each pixel point on the image is compared with the flight speed of the unmanned aerial vehicle to determine the static pixel points and the moving pixel points in the image.
The flight speed of the unmanned aerial vehicle is obtained from its sensors as (vx, vy), and the optical flow information of each pixel point on the image, obtained by the optical flow method, is compared with the flight speed of the unmanned aerial vehicle, for example by the condition:

$$\sqrt{(V_x - v_x)^2 + (V_y - v_y)^2} > m$$

where (Vx, Vy) is the optical flow information of the pixel point on the image and (vx, vy) is the flight speed of the unmanned aerial vehicle.

When the condition is satisfied, the pixel point is considered a moving pixel point; otherwise it is considered a static pixel point. It should be noted that the static pixel points are pixel points of static objects in the image, such as buildings or stones. In this embodiment the value of m is 1; in a specific implementation, the implementer may change this value according to the actual situation.
The pixel values of the obtained static pixel points are set to 1 and those of the moving pixel points to 0, yielding a binary image recorded as I1.
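As an illustration, a minimal sketch of this classification step, assuming OpenCV dense optical flow, a comparison condition of the Euclidean form given above, and a `drone_speed` already projected into image coordinates (pixels per frame):

```python
import cv2
import numpy as np

def static_motion_mask(prev_gray, cur_gray, drone_speed, m=1.0):
    """Split pixels into static (1) and moving (0) by comparing dense optical
    flow against the drone's own speed; both are in pixels per frame."""
    # Dense optical flow (Vx, Vy) for every pixel between two adjacent frames.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, cur_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    vx, vy = drone_speed
    # Residual motion after removing the flow induced by the drone itself.
    residual = np.sqrt((flow[..., 0] - vx) ** 2 + (flow[..., 1] - vy) ** 2)
    # Residual above the threshold m -> moving pixel (0); otherwise static (1).
    binary = np.where(residual > m, 0, 1).astype(np.uint8)  # the image I1
    return binary, flow
```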
In order to eliminate the influence of the self-motion of the photographed object on the detection of the jelly effect index, this embodiment analyzes the optical flow information of the static pixel points to obtain the jelly effect index. If the jelly effect exists in the image, the scanning time difference between different rows causes the optical flow information of the static pixel points to differ between rows, and this difference reflects the severity of the jelly effect of the image; the jelly effect is therefore quantized to obtain the jelly effect index W.
The IMU sensor carried on the unmanned aerial vehicle senses the angular velocities of the unmanned aerial vehicle on the x, y and z axes, and the vibration condition of the unmanned aerial vehicle can be obtained from the rate of change of the three-axis angular velocities over a multi-frame period. In this embodiment 5 frames are used; in a specific implementation, the implementer may change this according to the actual situation. The angular velocity sequence of the unmanned aerial vehicle on the x, y and z axes within the 5-frame time is obtained from the IMU sensor and recorded as {φ1, φ2, φ3, φ4, φ5}, where each φ contains the angular velocity values on the three axes.

The angular velocity values of the three axes in each frame are subtracted from those of the previous frame to obtain the change of the angular velocity, and the average change rate of the angular velocities of the three axes is calculated and used as an index reflecting the vibration condition of the unmanned aerial vehicle.
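A small sketch of this vibration index, assuming `omega` holds the 5-frame angular velocity sequence {φ1, …, φ5} as a 5×3 array:

```python
import numpy as np

def vibration_index(omega):
    """omega: (5, 3) array of per-frame angular velocities on the x, y, z axes.
    Returns the average change rate of each axis over the 5-frame window."""
    delta = np.diff(omega, axis=0)     # frame-to-frame change, shape (4, 3)
    return np.abs(delta).mean(axis=0)  # mean absolute change rate per axis
```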
During flight shooting of the unmanned aerial vehicle, multiple groups of data consisting of the flight speed of the unmanned aerial vehicle and the shutter time of the camera are collected; the flight speed is recorded as v = (vx, vy) and the shutter time of the camera as f. In this embodiment, sampling is performed every 5 frames, each sampling yielding one group of sample data, the i-th group being indexed by i.

In this embodiment, K groups of data in total are collected while the unmanned aerial vehicle flies and shoots.
In the embodiment, the jelly effect of the future frame image is predicted by using 5 groups of sample data, so that the parameters of the unmanned aerial vehicle can be adjusted in time, and the jelly effect is avoided.
The description will be given by taking one correspondence between training data and label data as an example: the five groups of sample data from groups 1 to 5 form the training data, and W6, the jelly effect index obtained by analyzing the optical flow information of the static pixel points in the 6th group of images, is the label data. Multiple groups of training data sets and corresponding label data can be obtained according to the same correspondence. It should be noted that the label data is used for computing a loss value, which is the difference between the actual output of the network and the label data; the training process gradually reduces this difference, and ideally the trained network outputs the label data. The label data enables automatic labeling of the data without human participation.
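This correspondence amounts to a sliding window over the sampled groups; a sketch, where `samples` is a list of 6-dimensional feature vectors (one per group) and `W` the per-group jelly effect indices:

```python
def build_dataset(samples, W, window=5):
    """samples[i]: the 6 features of group i (vx, vy, three angular-velocity
    change rates, shutter time); W[i]: jelly effect index of group i."""
    X, y = [], []
    for i in range(len(samples) - window):
        X.append(samples[i:i + window])  # 5 consecutive groups -> shape (5, 6)
        y.append(W[i + window])          # automatically generated label, e.g. W6
    return X, y
```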
Constructing a training data set and corresponding label data according to the acquired K groups of data, training a time sequence prediction network, and introducing a training process of the time sequence prediction network by taking TCN as an example in the embodiment:
a) normalizing the training data set and the corresponding label data, and adjusting the training data set and the corresponding label data to a uniform interval to be used as training data;
b) The input shape of the TCN is set to [B, N, 6] and the output shape to [B, 1]. In the input shape, B is the batch size, indicating the amount of data fed into the network at one time; N is the time scale, and since this embodiment predicts the jelly effect index of the future frame image from 5 groups of historical data, N is taken to be 5. The 6 characteristic values are the flight speed components vx and vy of the unmanned aerial vehicle, the average change rates of the angular velocities on the three axes (reflecting the vibration degree of the unmanned aerial vehicle), and the camera shutter time. In the output shape, 1 represents the predicted jelly effect index value of the future frame image.
c) The TCN time sequence prediction network extracts features from the training data to obtain a feature matrix, which is input into the fully-connected layer to obtain the final output: the predicted jelly effect index value of the future frame image.
d) The loss function is a cross-entropy loss function.
After the training of the time sequence prediction network is finished, the flight speed of the unmanned aerial vehicle, the shutter time of the camera and the reading of the IMU are input into the time sequence prediction network, and the jelly effect index of the future frame image can be predicted.
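As a shape-level sketch of steps a) to d): the patent fixes the input [B, N, 6] and the output [B, 1] but not the internal architecture, so the stand-in TCN below (a small stack of dilated 1-D convolutions in PyTorch) and its layer sizes are assumptions:

```python
import torch
import torch.nn as nn

class TinyTCN(nn.Module):
    """Stand-in TCN: input [B, N=5, 6] -> predicted jelly effect index [B, 1]."""
    def __init__(self, in_feat=6, hidden=32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(in_feat, hidden, kernel_size=2, dilation=1, padding=1),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=2, dilation=2, padding=2),
            nn.ReLU(),
        )
        self.fc = nn.Linear(hidden, 1)    # fully-connected head -> index value

    def forward(self, x):                 # x: [B, N, 6]
        h = self.conv(x.transpose(1, 2))  # Conv1d expects [B, C, N]
        return self.fc(h[:, :, -1])       # last time step -> [B, 1]

# Example: one batch of 8 sequences of 5 groups with 6 features each.
pred = TinyTCN()(torch.randn(8, 5, 6))    # pred.shape == (8, 1)
```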
And step S002, taking the area ratio of the moving pixel points and the static pixel points in each frame of image as dynamic characteristics, and comparing the dynamic characteristics with a preset threshold value to judge the weights of the static pixel points and the moving pixel points.
The jelly effect is generated by relative motion between the camera and the shot object, a reasonable adjusting mode can be selected by utilizing the dynamic characteristics of the image, and the parameters of the unmanned aerial vehicle can be timely and effectively adjusted.
And S003, determining an adjusting mode and adjusting related parameters of the unmanned aerial vehicle in time according to the jelly effect index obtained by the time sequence prediction network and the dynamic characteristics of each frame of image.
If the jelly effect index exceeds a preset index threshold W0, the jelly effect is clearly visible to the human eye; at this moment, the adjustment mode is determined by combining the dynamic characteristics of the image, and the parameters of the unmanned aerial vehicle are adjusted in time to eliminate the jelly effect.
Step S004, the selection method of the adjustment mode comprises the following steps: when the weight of the static pixel point is high, the pendant damping is adjusted to the target damping; when the weight of the motion pixel point is high, calculating the average motion speed of the motion pixel point, adjusting the flight speed of the unmanned aerial vehicle according to the average motion speed, and adjusting the pendant damping to the target damping; the target damping is obtained according to a pendant damping adjusting network which takes a jelly effect index as input and takes the target damping as output.
When the dynamic characteristic Y is less than or equal to 0.5, i.e., the static pixel points occupy the dominant position in the image, the first adjustment mode is used: the flight speed of the unmanned aerial vehicle does not need to be changed; the pendant damping is changed directly to avoid resonance of the unmanned aerial vehicle and vibration of the camera, eliminating the jelly effect.
In this embodiment, the specific implementation method of the first adjustment mode is as follows: the jelly effect index value is input into the trained pendant damping adjustment network, and the pendant damping is adjusted to the target damping output by the network, eliminating the jelly effect.
When the dynamic characteristic Y is larger than 0.5, i.e., the moving pixel points are dominant in the image, the second adjustment mode is used: the flight speed of the unmanned aerial vehicle is adjusted to ensure that the relative speed between the unmanned aerial vehicle and the moving objects is within the allowed range, and then the pendant damping of the unmanned aerial vehicle is adjusted to eliminate the jelly effect.
In this embodiment, the specific implementation method of the second adjustment mode is as follows: the average motion speed of the moving pixel points in the image is calculated, and the flight speed v of the unmanned aerial vehicle is adjusted so that the relative speed between the unmanned aerial vehicle and the moving pixel points in the image is less than Vb, where Vb is a manually set threshold. The pendant damping is then adjusted to the target damping output by the pendant damping adjustment network, eliminating the jelly effect.
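A sketch of this second adjustment mode; `set_flight_speed` and `set_pendant_damping` are hypothetical flight-controller hooks, and all speeds are assumed to share one coordinate frame (pixels per frame):

```python
import numpy as np

def second_adjustment_mode(flow, moving_mask, v, V_b,
                           target_damping, set_flight_speed, set_pendant_damping):
    """moving_mask marks moving pixels with 1 (the inverse of the binary
    image I1); v is the current drone speed, V_b the allowed relative speed."""
    # Average motion speed of the moving pixel points.
    mean_speed = flow[moving_mask == 1].mean(axis=0)
    # Adjust the drone speed so that the relative speed stays below V_b.
    if np.linalg.norm(np.asarray(v) - mean_speed) >= V_b:
        set_flight_speed(mean_speed)      # hypothetical controller call
    # Then apply the target damping output by the damping-adjustment network.
    set_pendant_damping(target_damping)
```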
Further, the step of analyzing the optical flow information of the static pixel points in each frame of image to obtain the index of the jelly effect comprises:
step S101, obtaining an optical flow information sequence of each frame of image according to the optical flow information of each line of static pixel points in each frame of image.
According to the obtained static pixel points in each frame of image and their corresponding optical flow information, the static pixel points are classified by row number to obtain the static pixel points of each row.
It should be noted that the rolling shutter camera collects the pixels row by row, so that the optical flow information of the stationary pixels in each row is consistent. In this embodiment, the average value of the optical flow information of each row of static pixels is used as the optical flow information of the current row.
The average of the optical flow information of the first row of static pixel points is calculated as:

$$Q_1 = \frac{1}{n}\sum_{i=1}^{n} q_i$$

where n is the number of static pixel points in the first row and qi is the optical flow information of the i-th static pixel point of the first row; Q1 contains the optical flow information of the first row of static pixel points in both the horizontal and vertical coordinate directions.

The optical flow information of each row of static pixel points in the same frame image is obtained in the same way, giving the optical flow information sequence {Q1, Q2, …, QN}, where N denotes the total number of rows in the image.
And step S102, calculating the difference of the optical flow information between adjacent lines according to the optical flow information sequence to obtain an optical flow information change sequence.
The values in the optical flow information sequence are regarded as coordinate points in a coordinate system, and the difference of the optical flow information between adjacent rows, ΔQi = Qi − Qi−1, is calculated; the resulting optical flow information change sequence is recorded as {ΔQ2, ΔQ3, …, ΔQN}.
And step S103, determining a jelly effect index according to the mean value and the variance of the optical flow information change sequence.
The mean of the optical flow information change sequence is calculated and recorded as μΔQ, and the variance of the sequence is recorded as SΔQ; the jelly effect index W is constructed from these two statistics.

It should be noted that the mean reflects the degree of change of the optical flow information within the same frame of picture: the larger μΔQ is, the more severe the jelly effect of the image. The variance reflects the degree of deviation of each optical flow information change value from the mean: the larger the deviation, the more severe the jelly effect of the image. The value range of the jelly effect index is [0, 1], and the larger the value, the more serious the jelly effect.
It should be noted that selecting the static pixel points to obtain the jelly effect index has the following beneficial effects. First, it eliminates the detection error that would be caused by jelly-effect-like distortion arising from the self-motion of moving pixels. Second, it increases the generalization of the system: no particular objects such as buildings or stones need to be specified, and the static pixel points are selected directly, which avoids the detection error caused by the absence of a specified object in the image.
Further, the step of taking the ratio of the area of the moving pixel point to the area of the static pixel point in each frame of image as the dynamic characteristic comprises the following steps:
step S201, connected domain analysis is carried out on the binary image of the image, and the area of the static pixel point is obtained.
The binary image I1 of the image is subjected to connected domain analysis to obtain the area of the white region, which is the area of the static pixel points and is recorded as s1.
Step S202, the binary image of the image is inverted, and connected domain analysis is performed to obtain the area of the moving pixel points.

The binary image I1 of the image is inverted, converting white pixels into black pixels and black pixels into white; connected domain analysis is then performed to obtain the area of the moving pixel points, recorded as s2.
Step S203, dynamic characteristics are constructed according to the areas of the moving pixel points and the areas of the static pixel points in the image.
The dynamic characteristic of the image is calculated as:

$$Y = \frac{s_2}{s_1 + s_2}$$

where s1 is the area of the static pixel points and s2 is the area of the moving (dynamic) pixel points.
When the dynamic characteristic Y is more than 0.5, the moving pixel points in the image are considered to be dominant;
when the dynamic characteristic Y is less than or equal to 0.5, the static pixel points in the image are considered to be in the main position.
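A sketch of steps S201 to S203 with OpenCV connected-component analysis, using the dynamic characteristic Y = s2 / (s1 + s2) given above:

```python
import cv2
import numpy as np

def dynamic_feature(binary):
    """binary: uint8 mask with static pixels == 1 (white), moving pixels == 0.
    Returns Y; Y > 0.5 means the moving pixel points dominate."""
    img = (binary * 255).astype(np.uint8)
    # s1: total area of the white (static) connected components.
    _, _, stats, _ = cv2.connectedComponentsWithStats(img)
    s1 = int(stats[1:, cv2.CC_STAT_AREA].sum())   # skip label 0 (background)
    # Invert so the moving pixel points become the foreground, then repeat.
    _, _, stats, _ = cv2.connectedComponentsWithStats(255 - img)
    s2 = int(stats[1:, cv2.CC_STAT_AREA].sum())
    return s2 / (s1 + s2)
```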
Further, the training step of the pendant damping adjustment network comprises the following steps:
the pendant damping is used as a label, IMU reading and jelly effect indexes of the unmanned aerial vehicle are used as a training set, a pendant damping adjusting network is trained, a mean square error loss function is adopted as a loss function, iteration is carried out continuously, and model parameters are updated.
The pendant damping adjustment network consists of an input layer, two hidden layers and an output layer. During aerial photography of the unmanned aerial vehicle, the IMU readings of the unmanned aerial vehicle, the pendant damping and the jelly effect index of the image are collected; the data whose jelly effect index is larger than the preset index threshold W0 are removed, and the remaining data form the final sample data.
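A minimal sketch of this network and one training iteration, assuming PyTorch; the feature layout (three IMU axes plus the jelly effect index) and the hidden widths are assumptions:

```python
import torch
import torch.nn as nn

model = nn.Sequential(          # input layer -> two hidden layers -> output
    nn.Linear(4, 32), nn.ReLU(),
    nn.Linear(32, 32), nn.ReLU(),
    nn.Linear(32, 1),           # output: target pendant damping
)
loss_fn = nn.MSELoss()          # mean square error loss, as described
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(features, damping_label):
    """features: [B, 4]; damping_label: recorded pendant damping, [B, 1]."""
    optimizer.zero_grad()
    loss = loss_fn(model(features), damping_label)
    loss.backward()
    optimizer.step()            # continuous iteration updates the parameters
    return loss.item()
```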
Based on the same inventive concept as the method embodiment, the embodiment of the invention also provides a jelly effect prevention system based on artificial intelligence in an aerial photography scene. The system includes a timing prediction network unit 100, a dynamic characteristics construction unit 200, an adjustment mode determination unit 300, and an adjustment mode operation unit 400.
Specifically, the timing prediction network unit 100 is configured to input the flight speed of the unmanned aerial vehicle, the shutter time of the camera, and the reading of the IMU into the timing prediction network to predict the jelly effect index of the future frame image; the label of the time sequence prediction network is a jelly effect index obtained by analyzing the optical flow information of static pixel points in each frame of image, and the static pixel points are obtained according to the optical flow information of the image and the flight speed of the unmanned aerial vehicle;
a dynamic feature construction unit 200, configured to take the ratio of the area of the moving pixel points to the area of the static pixel points in each frame of image as a dynamic feature, compare the dynamic feature with a preset threshold value, and determine the weights of the static pixel points and the moving pixel points;
the adjustment mode determining unit 300 is configured to determine an adjustment mode to adjust relevant parameters of the unmanned aerial vehicle in time according to the jelly effect index obtained by the timing prediction network and the dynamic characteristics of each frame of image;
an adjustment mode operation unit 400 configured to adjust the pendant damping to a target damping when the weight of the stationary pixel is high; when the weight of the motion pixel point is high, calculating the average motion speed of the motion pixel point, adjusting the flight speed of the unmanned aerial vehicle according to the average motion speed, and adjusting the pendant damping to the target damping; the target damping is obtained according to a pendant damping adjusting network which takes a jelly effect index as input and takes the target damping as output.
Further, the time sequence prediction network unit includes:
the optical flow information sequence acquisition module is used for acquiring an optical flow information sequence of each frame of image according to the optical flow information of each row of static pixel points in each frame of image;
the optical flow information change sequence acquisition module is used for calculating the difference of optical flow information between adjacent lines according to the optical flow information sequence to obtain an optical flow information change sequence;
and the jelly effect index determining module is used for determining a jelly effect index according to the mean value and the variance of the optical flow information change sequence.
Further, the dynamic feature construction unit includes:
the static pixel point area obtaining module is used for carrying out connected domain analysis on the binary image of the image to obtain the area of the static pixel point;
the device comprises a motion pixel point area obtaining module, a motion pixel point area obtaining module and a motion pixel point area obtaining module, wherein the motion pixel point area obtaining module is used for transposing a binary image of an image and carrying out connected domain analysis to obtain the area of a motion pixel point;
and the dynamic characteristic construction module is used for constructing dynamic characteristics according to the areas of the moving pixel points and the static pixel points in the image.
Further, the adjustment mode operation unit includes:
and the pendant damping adjustment network training module is used for training a pendant damping adjustment network by taking the pendant damping as a label and taking IMU (inertial measurement unit) reading and jelly effect indexes of the unmanned aerial vehicle as a training set, wherein the loss function adopts a mean square error loss function, and model parameters are updated through continuous iteration.
It should be noted that: the precedence order of the above embodiments of the present invention is only for description, and does not represent the merits of the embodiments. And specific embodiments thereof have been described above. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.
Claims (8)
1. A jelly effect prevention method in an aerial photography scene based on artificial intelligence is characterized by comprising the following steps:
inputting the flight speed of the unmanned aerial vehicle, the shutter time of the camera and the reading of the IMU into a time sequence prediction network to predict the jelly effect index of a future frame image; the label of the time sequence prediction network is a jelly effect index obtained by analyzing the optical flow information of static pixel points in each frame of image, and the static pixel points are obtained according to the optical flow information of the image and the flight speed of the unmanned aerial vehicle;
taking the area ratio of the moving pixel points and the static pixel points in each frame of image as dynamic characteristics, comparing the dynamic characteristics with a preset threshold value, and judging the weights of the static pixel points and the moving pixel points;
determining an adjusting mode and adjusting relevant parameters of the unmanned aerial vehicle in time according to the jelly effect index obtained by the time sequence prediction network and the dynamic characteristics of each frame of image;
the selection method of the adjustment mode comprises the following steps: when the weight of the static pixel point is high, adjusting the pendant damping to a target damping; when the weight of the motion pixel point is high, calculating the average motion speed of the motion pixel point, adjusting the flight speed of the unmanned aerial vehicle according to the average motion speed, and adjusting the pendant damping to the target damping; the target damping is obtained according to a pendant damping adjustment network which takes the jelly effect index as input and the target damping as output.
2. The method for preventing jelly effect in the aerial photography scene based on artificial intelligence as claimed in claim 1, wherein the step of analyzing the optical flow information of the static pixel points in each frame of image to obtain the jelly effect index comprises:
acquiring an optical flow information sequence of each frame of image according to the optical flow information of each row of static pixel points in each frame of image;
calculating the difference of the optical flow information between adjacent lines according to the optical flow information sequence to obtain an optical flow information change sequence;
and determining the jelly effect index according to the mean value and the variance of the optical flow information change sequence.
3. The method for preventing jelly effect in the aerial photography scene based on artificial intelligence of claim 1, wherein the step of taking the ratio of the area of the moving pixel point to the area of the static pixel point in each frame of image as the dynamic feature comprises:
performing connected domain analysis on the binary image of the image to obtain the area of the static pixel point;
inverting the binary image of the image, and performing connected domain analysis to obtain the area of the moving pixel point;
and constructing dynamic characteristics according to the areas of the moving pixel points and the static pixel points in the image.
4. The method for preventing jelly effect in the aerial photography scene based on artificial intelligence of claim 1, wherein the training step of the pendant damping adjustment network comprises:
and training a pendant damping adjusting network by taking the pendant damping as a label and taking IMU reading of the unmanned aerial vehicle and the jelly effect index as a training set, wherein a mean square error loss function is adopted as a loss function, and model parameters are updated by continuously iterating.
5. A jelly effect prevention system under an aerial photography scene based on artificial intelligence is characterized by comprising the following units:
the timing sequence prediction network unit is used for inputting the flight speed of the unmanned aerial vehicle, the shutter time of the camera and the reading of the IMU into the timing sequence prediction network to predict the jelly effect index of the future frame image; the label of the time sequence prediction network is a jelly effect index obtained by analyzing the optical flow information of static pixel points in each frame of image, and the static pixel points are obtained according to the optical flow information of the image and the flight speed of the unmanned aerial vehicle;
the dynamic feature construction unit is used for taking the ratio of the area of a moving pixel point to the area of a static pixel point in each frame of image as a dynamic feature, comparing the dynamic feature with a preset threshold value and judging the weight of the static pixel point and the moving pixel point;
the adjustment mode determining unit is used for determining an adjustment mode and adjusting relevant parameters of the unmanned aerial vehicle in time according to the jelly effect index obtained by the time sequence prediction network and the dynamic characteristics of each frame of image;
an adjustment mode operation unit, configured to adjust a pendant damping to a target damping when the weight of the stationary pixel point is high; when the weight of the motion pixel point is high, calculating the average motion speed of the motion pixel point, adjusting the flight speed of the unmanned aerial vehicle according to the average motion speed, and adjusting the pendant damping to the target damping; the target damping is obtained according to a pendant damping adjustment network which takes the jelly effect index as input and the target damping as output.
6. The system of claim 5, wherein the time sequence prediction network unit comprises:
the optical flow information sequence acquisition module is used for acquiring an optical flow information sequence of each frame of image according to the optical flow information of each row of static pixel points in each frame of image;
the optical flow information change sequence acquisition module is used for calculating the difference of optical flow information between adjacent lines according to the optical flow information sequence to obtain an optical flow information change sequence;
and the jelly effect index determining module is used for determining the jelly effect index according to the mean value and the variance of the optical flow information change sequence.
7. The system for preventing jelly effect in an aerial photography scene based on artificial intelligence of claim 5, wherein the dynamic feature constructing unit comprises:
the static pixel point area obtaining module is used for carrying out connected domain analysis on the binary image of the image to obtain the area of the static pixel point;
a moving pixel point area obtaining module, configured to invert the binary image of the image and perform connected domain analysis to obtain the area of the moving pixel point;
and the dynamic feature construction module is used for constructing dynamic features according to the areas of the moving pixel points and the static pixel points in the image.
8. The system for preventing jelly effect in an artificial intelligence based aerial photography scene according to claim 5, wherein the adjustment mode operation unit comprises:
and the pendant damping adjustment network training module is used for training a pendant damping adjustment network by taking the pendant damping as a label and taking the IMU reading of the unmanned aerial vehicle and the jelly effect index as a training set, wherein the loss function adopts a mean square error loss function, and model parameters are updated through continuous iteration.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
| --- | --- | --- | --- |
| CN202110355358.6A | 2021-04-01 | 2021-04-01 | Jelly effect prevention method and system in aerial photography scene based on artificial intelligence |
Publications (1)

| Publication Number | Publication Date |
| --- | --- |
| CN112926530A | 2021-06-08 |
Family

ID=76173727

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
| --- | --- | --- | --- |
| CN202110355358.6A | Jelly effect prevention method and system in aerial photography scene based on artificial intelligence | 2021-04-01 | 2021-04-01 |

Country Status (1)

| Country | Link |
| --- | --- |
| CN (1) | CN112926530A (en) |
Cited By (4)

| Publication number | Priority date | Publication date | Assignee | Title |
| --- | --- | --- | --- | --- |
| CN113284134A | 2021-06-17 | 2021-08-20 | 张清坡 | Unmanned aerial vehicle flight platform for geological survey |
| CN113284134B | 2021-06-17 | 2023-09-26 | 张清坡 | Unmanned aerial vehicle flight platform for geological survey |
| WO2023185584A1 | 2022-04-02 | 2023-10-05 | 深圳市道通智能航空技术股份有限公司 | Flight control method, unmanned aerial vehicle and readable storage medium |
| CN115063380A | 2022-06-29 | 2022-09-16 | 东集技术股份有限公司 | Industrial camera parameter selection method and device, storage medium and computer equipment |
Legal Events

| Date | Code | Title | Description |
| --- | --- | --- | --- |
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | WW01 | Invention patent application withdrawn after publication | Application publication date: 20210608 |