CN115866240A - Video stability determination method and device - Google Patents


Info

Publication number
CN115866240A
CN115866240A
Authority
CN
China
Prior art keywords
characteristic curve
target
video
motion
target video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211470833.5A
Other languages
Chinese (zh)
Inventor
周赞
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202211470833.5A priority Critical patent/CN115866240A/en
Publication of CN115866240A publication Critical patent/CN115866240A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The application discloses a method and a device for determining video stability, and belongs to the technical field of video processing. The video stability determination method comprises the following steps: acquiring optical flow characteristic points of a target video; determining a motion characteristic curve of the target video according to the pixel position of the optical flow characteristic point in each frame of image of the target video; determining a characteristic parameter value of the motion characteristic curve according to the target model and the motion characteristic curve, wherein the characteristic parameter value is used for indicating the fluctuation degree of the motion characteristic curve; and determining the stability of the target video according to the characteristic parameter values.

Description

Video stability determination method and device
Technical Field
The application belongs to the technical field of video processing, and particularly relates to a method and a device for determining video stability.
Background
At present, mobile phones are increasingly widely used as camera devices, and the camera systems inside mobile phones have become more and more complete. For example, many mobile phones provide an anti-shake function when capturing video in order to achieve a better image capturing effect. On this basis, quality evaluation of the video anti-shake effect plays an important role in research on the camera function of mobile phones.
However, current quality evaluation methods for the video anti-shake effect, that is, current schemes for determining video stability, either rely on a limited set of simulated shake scenarios that cannot cover users' real use scenarios, so that the evaluation result is one-sided, or require a large amount of manpower, so that the evaluation efficiency is low, the process is easily affected by human factors, and the final evaluation result is not accurate enough.
Disclosure of Invention
The embodiment of the application aims to provide a video stability determination method and a video stability determination device, which can improve the evaluation efficiency of evaluating video stability and the accuracy of an evaluation result.
In a first aspect, an embodiment of the present application provides a method for determining video stability, where the method includes: acquiring optical flow characteristic points of a target video; determining a motion characteristic curve of the target video according to the pixel position of the optical flow characteristic point in each frame of image of the target video; determining a characteristic parameter value of the motion characteristic curve according to the target model and the motion characteristic curve, wherein the characteristic parameter value is used for indicating the fluctuation degree of the motion characteristic curve; and determining the stability of the target video according to the characteristic parameter values.
In a second aspect, an embodiment of the present application provides an apparatus for determining video stability, where the apparatus includes: an acquisition unit configured to acquire optical flow feature points of a target video; the first processing unit is used for determining a motion characteristic curve of the target video according to the pixel position of the optical flow characteristic point in each frame of image of the target video; the second processing unit is used for determining a characteristic parameter value of the motion characteristic curve according to the target model and the motion characteristic curve, wherein the characteristic parameter value is used for indicating the fluctuation degree of the motion characteristic curve; and the third processing unit is used for determining the stability of the target video according to the characteristic parameter values.
In a third aspect, embodiments of the present application provide an electronic device, which includes a processor and a memory, where the memory stores a program or instructions executable on the processor, and the program or instructions, when executed by the processor, implement the steps of the video stability determination method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the video stability determination method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the steps of the video stability determination method according to the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product, stored on a storage medium, the program product being executed by at least one processor to implement the steps of the video stability determination method according to the first aspect.
In the video stability determination method provided by the embodiment of the application, optical flow feature points of a target video are acquired, and a motion feature curve of the target video is determined according to pixel positions of the optical flow feature points in each frame of image of the target video. On the basis, according to the target model and the motion characteristic curve, the characteristic parameter value of the motion characteristic curve is determined, and then the stability of the target video is determined according to the characteristic parameter value. Wherein, the characteristic parameter value is used for indicating the fluctuation degree of the motion characteristic curve.
According to the video stability determination method, when the stability of the target video is evaluated, the motion path of the target video is determined based on the pixel position of the optical flow characteristic point in the target video in each frame image of the target video to obtain the motion characteristic curve of the target video, and then the characteristic parameter value used for indicating the fluctuation degree of the motion characteristic curve is determined according to the motion characteristic curve of the target video and the target model, and the stability of the target video is evaluated according to the characteristic parameter value. Therefore, the stability of the video can be automatically analyzed based on the motion characteristic information of the video, namely the characteristic parameter value capable of representing the motion fluctuation degree or the jitter degree of the video without manual review, the evaluation efficiency is ensured, the comprehensiveness and the accuracy of the evaluation result are improved, and the requirement of evaluating the stability of the video in batches is met.
Drawings
Fig. 1 is a schematic flowchart of a video stability determination method according to an embodiment of the present application;
fig. 2 is a schematic diagram of a video stability determination method according to an embodiment of the present disclosure;
fig. 3 is a second schematic diagram of a video stability determination method according to an embodiment of the present application;
fig. 4 is a third schematic diagram of a video stability determination method according to an embodiment of the present application;
fig. 5 is a block diagram illustrating a structure of a video stability determining apparatus according to an embodiment of the present disclosure;
fig. 6 is a block diagram of an electronic device according to an embodiment of the present disclosure;
fig. 7 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments that can be derived by one of ordinary skill in the art from the embodiments given herein are intended to be within the scope of the present disclosure.
The terms "first", "second" and the like in the description and claims of the present application are used to distinguish between similar objects, and are not necessarily used to describe a particular order or sequence. It is to be understood that the terms so used are interchangeable under appropriate circumstances, so that the embodiments of the application can be implemented in orders other than those illustrated or described herein. The objects distinguished by "first", "second" and the like are generally of one type, and the number of objects is not limited; for example, the first object may be one or more than one. In addition, "and/or" in the description and claims denotes at least one of the connected objects, and the character "/" generally indicates that the associated objects before and after it are in an "or" relationship.
An embodiment of the first aspect of the present application provides a method for determining video stability. The execution subject of the technical scheme of the video stability determination method provided by the embodiment of the present application may be a video stability determination apparatus, and may specifically be determined according to actual use requirements; the embodiment of the present application does not limit this. In order to more clearly describe the video stability determination method provided in the embodiment of the present application, the following method embodiments exemplarily take a video stability determination apparatus as the execution subject of the video stability determination method.
The video stability determining method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
As shown in fig. 1, an embodiment of the present application provides a video stability determination method, which may include the following S102 to S108:
s102: and acquiring optical flow characteristic points of the target video.
It can be understood that when a video is shot, an object in three-dimensional space is mapped to one or more determined points in two-dimensional space; the video is formed by the continuous shifting of countless such points in the shot picture, and the degree of shake of the video affects how strongly each point in the video shifts.
The optical flow feature points are the pixel points in the video that correspond to a relatively static object in the target video; by analyzing the motion features of the optical flow feature points in the target video, the overall motion features of the target video can be determined. That is, the displacement change of the optical flow feature points in the target video can indicate the degree of shake of the target video.
In an actual application process, optical flow feature points in the target video can be specifically extracted through an LK (Lucas-Kanade) optical flow algorithm, so that the overall motion of the target video is determined according to the motion features of the optical flow feature points.
S104: and determining a motion characteristic curve of the target video according to the pixel positions of the optical flow characteristic points in each frame of image of the target video.
It is understood that, according to the pixel position of the optical flow feature point in each frame image of the target video, the overall motion feature of the optical flow feature point in the target video can be determined. Further, the instantaneous rate of change of the gray level of a pixel point on a two-dimensional image is defined as an optical flow vector, and when the time interval is small, such as between two consecutive frames of images of the target video, the optical flow vector of an optical flow feature point between the two consecutive frames of images is equivalent to the displacement amount of the optical flow feature point between the two consecutive frames of images. For example, when the optical flow vector of an optical flow feature point between two consecutive images is (u, v), u represents the displacement amount of the optical flow feature point in the x direction and also represents the instantaneous change amount of the displacement of the optical flow feature point in the x direction, and v represents the displacement amount of the optical flow feature point in the y direction and also represents the instantaneous change amount of the displacement of the optical flow feature point in the y direction.
Therefore, in the video stability determination method provided in the embodiment of the present application, the optical flow information of the optical flow feature points, that is, the optical flow vectors of the optical flow feature points between every two consecutive frames of images, can be determined through the target optical flow algorithm, and then the motion feature curve of the entire target video is determined according to the optical flow information of the optical flow feature points.
Specifically, in an actual application process, the target optical flow algorithm may specifically adopt an LK optical flow algorithm, and the optical flow information of the optical flow feature points is determined by the LK optical flow algorithm. The LK optical flow algorithm is a method for calculating motion information of an object between adjacent frame images by using the change of pixels in an image sequence in a time domain and the correlation between adjacent frames to find the correspondence between the previous frame image and the current frame image.
The LK optical flow algorithm has the following three assumptions:
Gray-scale invariance assumption: a given point in the real world, reflected at the pixel level, has a constant gray value.
Small-perturbation assumption: a small perturbation in time does not cause a drastic change in the pixels.
Spatial consistency assumption: adjacent points on the same surface have similar motion, and they remain close at the pixel level.
On the basis of the above, when calculating the optical flow vector (u, v) of the optical flow feature points between every two consecutive frame images, the gray value or the brightness value of the optical flow feature points in the two consecutive frame images is equal based on the gray-scale invariant assumption of the LK optical flow algorithm. When the pixel position of the optical flow feature point in the current frame image is (x, y), and the pixel position of the optical flow feature point in the next frame image is (x + δ x, y + δ y), the following formula (1) can be obtained:
I(x + δx, y + δy, t + δt) = I(x, y, t),    (1)

where δx = u·δt, δy = v·δt, δt denotes a small time interval, I(x, y, t) denotes the gray value of the optical flow feature point at (x, y) at time t, and I(x + δx, y + δy, t + δt) denotes the gray value of the optical flow feature point at (x + δx, y + δy) at time t + δt.
On this basis, expanding the left side of formula (1) as a Taylor series yields the following formula (2):

I(x + δx, y + δy, t + δt) = I(x, y, t) + (∂I/∂x)·δx + (∂I/∂y)·δy + (∂I/∂t)·δt + ε,    (2)

where ∂I/∂x, ∂I/∂y and ∂I/∂t are the partial derivatives of I(x, y, t) with respect to x, y and t respectively, and ε collects the second- and higher-order terms of the Taylor expansion of I(x + δx, y + δy, t + δt); ε is small and may be omitted.
On the basis, the same terms on the left and right sides of the formula (2) are eliminated, e is omitted, and the left and right sides of the formula (2) are divided by δ t respectively to obtain the following formula (3):
(∂I/∂x)·(δx/δt) + (∂I/∂y)·(δy/δt) + ∂I/∂t = 0,    (3)
further, let u (x, y) = δ x/δ t, v (x, y) = δ y/δ t, the above equation (3) can be converted into the following equation (4):
u(x, y)·I_x + v(x, y)·I_y + I_t = 0,    (4)

where I_x, I_y and I_t are the differences of the current frame image in the x, y and t directions, i.e. I_x, I_y and I_t are the partial derivatives of I(x, y, t) with respect to x, y and t respectively.
On this basis, the above equation (4) can be converted into the following equation (5) based on the gradient method:
I_x·V_x + I_y·V_y = -I_t,    (5)

where V_x is equivalent to u above, and V_y is equivalent to v above, that is, formula (5) is equivalent to the following formula (6):

I_x·u + I_y·v = -I_t,    (6)

where (u, v) is the optical flow vector of the optical flow feature point between every two consecutive frame images.
On the basis, based on the assumption of spatial consistency of the LK algorithm, it can be known that adjacent pixels in the image have similar motions, i.e., pixels in one neighborhood in the image all satisfy the above formula (6). That is, the pixel points around the optical-flow feature point (x, y) are consistent with the motion of the optical-flow feature point (x, y), i.e., they have the same u and v. At this time, assuming that there are n pixel points around the optical flow feature point (x, y), the following n equations can be obtained:
I_x1·u + I_y1·v = -I_t1
I_x2·u + I_y2·v = -I_t2
⋮
I_xn·u + I_yn·v = -I_tn
further, writing the n equations in a matrix form, the following equation (7) can be obtained:
Figure BDA0003958459020000071
by simplifying the above equation (7), the following equation (8) can be obtained:
I xy V=I t ,(8)
wherein, V is the optical flow information to be obtained. On the basis, the two sides of the formula (8) are simultaneously multiplied by the matrix I xy Is transposed matrix of
Figure BDA0003958459020000077
The following formula (9) is obtained: />
Figure BDA0003958459020000072
Further, multiplying both sides of the above formula (9) on the left by the inverse matrix (I_xyᵀ·I_xy)⁻¹ yields the following formula (10):

V = (I_xyᵀ·I_xy)⁻¹·I_xyᵀ·I_t,    (10)

Further, expanding the above formula (10) yields the following formula (11), from which the values of u and v, i.e. the optical flow vector of the optical flow feature point, can be obtained:

[u; v] = [Σᵢ I_xi², Σᵢ I_xi·I_yi; Σᵢ I_xi·I_yi, Σᵢ I_yi²]⁻¹ · [-Σᵢ I_xi·I_ti; -Σᵢ I_yi·I_ti],    (11)
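The least-squares solution of formulas (7)–(11) can be sketched directly in NumPy. The function name and window size below are assumptions for the illustration; the inputs are the derivative samples I_x, I_y, I_t over the n pixels of one window:

```python
import numpy as np

def lk_flow_vector(Ix, Iy, It):
    """Solve I_xy · V = I_t in the least-squares sense (formulas (7)-(11)).
    Ix, Iy, It: arrays of the x-, y- and t-derivatives at the n window pixels."""
    A = np.stack([Ix, Iy], axis=1)   # n x 2 matrix I_xy
    b = -It                          # right-hand side of formula (7)
    # V = (A^T A)^{-1} A^T b, i.e. formula (10)
    V = np.linalg.inv(A.T @ A) @ A.T @ b
    return V                         # the optical flow vector (u, v)
```

On synthetic derivatives generated from a known flow (u, v), the function recovers that flow exactly, since the brightness-constancy constraint I_x·u + I_y·v = -I_t then holds at every pixel.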
on the basis, it can be understood that the optical-flow vector (u, v) of the optical-flow feature point represents the displacement change rate of the optical-flow feature point between two continuous frames of images, and the optical-flow vector (u, v) of the optical-flow feature point can only represent the motion change rule, i.e. the motion feature, of the optical-flow feature point. After the optical flow information of the optical flow feature points is obtained, the optical flow feature points need to be spatially converted, so as to determine the motion features of the entire target video according to the motion features of the optical flow feature points, that is, determine the motion feature curve of the target video.
In the practical application process, the commonly used mathematical models for describing the inter-frame relative motion of the video image sequence mainly include a translation model, a similarity transformation model and an affine transformation model. In the method for determining video stability provided in the embodiment of the present application, after obtaining optical flow information of optical flow feature points, that is, after determining motion features of the optical flow feature points, motion features of the entire target video are analyzed according to a target transformation model and the motion features of the optical flow feature points, so as to obtain a motion feature curve representing a motion path of the target video, so as to analyze a jitter degree of the target video according to the motion feature curve in the subsequent step.
In addition, in an actual application process, after obtaining the motion characteristic curve of the motion path of the target video, the motion characteristic curve may also be decomposed to obtain a motion characteristic curve component of the target video in a first direction, for example, an x direction, and obtain a motion characteristic curve component of the target video in a second direction, for example, a y direction. For example, as shown in fig. 2, (a) in fig. 2 shows a motion characteristic curve component of a target video in the x direction, wherein an ordinate x represents a displacement of the target video in the x direction; fig. 2 (b) shows the motion characteristic curve component of the target video in the y direction, where the ordinate y represents the displacement of the target video in the y direction.
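The curve construction and decomposition described above can be sketched as follows. The use of the per-frame median displacement as the robust inter-frame offset is an assumption for this illustration; the application does not fix a particular aggregation of the feature-point flows:

```python
import numpy as np

def motion_curve(tracks):
    """tracks: list of (n, 2) arrays, the feature-point positions per frame.
    Accumulate the median inter-frame displacement into a motion path, then
    split it into the x-direction and y-direction curve components."""
    offsets = [np.median(t1 - t0, axis=0)        # robust per-frame shift
               for t0, t1 in zip(tracks[:-1], tracks[1:])]
    path = np.cumsum(np.array(offsets), axis=0)  # motion characteristic curve
    return path[:, 0], path[:, 1]                # x and y components
```

Plotting the two returned components against the frame index reproduces curves of the kind shown in fig. 2 (a) and (b).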
S106: and determining the characteristic parameter value of the motion characteristic curve according to the target model and the motion characteristic curve.
Wherein, the characteristic parameter value is used for indicating the fluctuation degree of the motion characteristic curve.
Further, the target model is used for analyzing and processing a motion characteristic curve of the target video, so as to determine a characteristic parameter value of the motion characteristic curve. That is, when the stability of the target video is analyzed, the motion characteristic curve of the target video is an input object of the target model, and the characteristic parameter value of the motion characteristic curve is an output object of the target model.
Specifically, after the optical flow information of the optical flow feature points of the target video is analyzed to obtain a motion feature curve of the target video, the motion feature curve is input into the target model, and the motion feature curve of the target video is analyzed and processed through the target model to output a feature parameter value indicating the fluctuation degree of the motion feature curve, so that the stability of the target video is evaluated according to the feature parameter value.
S108: and determining the stability of the target video according to the characteristic parameter values.
Wherein, the characteristic parameter value is used for indicating the fluctuation degree of the motion characteristic curve.
It can be understood that the motion characteristic curve represents the motion change rule of the entire target video, and the intensity of fluctuation of the motion characteristic curve can represent the degree of shake of the target video.
Therefore, in the video stability determining method provided in the embodiment of the present application, after the characteristic parameter value of the motion characteristic curve of the target video is obtained, the jitter degree of the target video may be analyzed by analyzing the characteristic parameter value, so as to determine the stability degree of the target video.
By the video stability determination method provided by the embodiment of the application, the optical flow characteristic points of the target video are obtained, and the motion characteristic curve of the target video is determined according to the pixel positions of the optical flow characteristic points in each frame of image of the target video. On the basis, according to the target model and the motion characteristic curve, the characteristic parameter value of the motion characteristic curve is determined, and then the stability of the target video is determined according to the characteristic parameter value. Wherein, the characteristic parameter value is used for indicating the fluctuation degree of the motion characteristic curve. That is, according to the video stability determining method, when the stability of the target video is evaluated, the motion path of the target video is determined based on the pixel position of the optical flow feature point in the target video in each frame image of the target video, so as to obtain the motion feature curve of the target video, and then the feature parameter value for indicating the fluctuation degree of the motion feature curve is determined according to the motion feature curve of the target video and the target model, and the stability of the target video is evaluated according to the feature parameter value. Therefore, the stability of the video can be automatically analyzed based on the motion characteristic information of the video, namely the characteristic parameter value capable of representing the motion fluctuation degree or the jitter degree of the video without manual review, the evaluation efficiency is ensured, the comprehensiveness and the accuracy of the evaluation result are improved, and the requirement of evaluating the stability of the video in batches is met.
In this embodiment of the application, after the step S104, the video stability determination method further includes the following step S110:
s110: and smoothing the motion characteristic curve to remove noise information in the motion characteristic curve.
Specifically, in the video stability determining method provided in the embodiment of the present application, after the motion characteristic curve of the target video is obtained, the motion characteristic curve is further subjected to smoothing processing to filter part of the motion noise in the motion characteristic curve, so that in the subsequent process of evaluating the stability of the target video according to the motion characteristic curve, the noise information in the motion characteristic curve is prevented from interfering with the evaluation result, and the accuracy of evaluating the stability of the target video is ensured.
The motion characteristic curve represents the motion change rule of the target video among the multi-frame images. Therefore, in the actual application process, the motion characteristic curve can be smoothed by weighting the curve coordinate values of the motion characteristic curve corresponding to each two consecutive frames of images, so as to remove the noise information in the motion characteristic curve.
In addition, the motion characteristic curve may include a first characteristic curve component of the target video in a first direction, such as an x direction, and a second characteristic curve component of the target video in a second direction, such as a y direction. The first characteristic curve component represents the motion change rule of the target video among the multi-frame images of the target video in the first direction, and the second characteristic curve component represents the motion change rule of the target video among the multi-frame images of the target video in the second direction.
In an actual application process, the first characteristic curve component and the second characteristic curve component may be respectively smoothed, so as to further evaluate accuracy of the target video stability.
Illustratively, a second characteristic curve component of the target video in a second direction, for example, the y direction, is as shown in (b) in fig. 2, and the second characteristic curve component is smoothed to remove noise information in the second characteristic curve component, so as to obtain a smoothed second characteristic curve component as shown in fig. 3.
According to the embodiment provided by the application, after the motion characteristic curve of the target video is determined, the motion characteristic curve is subjected to smoothing processing so as to remove noise information in the motion characteristic curve. Therefore, in the subsequent process of evaluating the stability of the target video according to the motion characteristic curve, the noise information in the motion characteristic curve is prevented from interfering the evaluation result, and the accuracy of evaluating the stability of the target video is ensured.
In an embodiment of the present invention, the S110 may specifically include the following S110a to S110c:
s110a: and acquiring the characteristic coordinate value corresponding to each frame of image in the motion characteristic curve.
The motion characteristic curve represents the motion change rule of the target video among the multi-frame images.
Further, the feature coordinate values are curve coordinate values (x, y) of the motion feature curve corresponding to each frame of image, where x is a coordinate component of the motion feature curve corresponding to each frame of image in a first direction, such as the x direction, and y is a coordinate component of the motion feature curve corresponding to each frame of image in a second direction, such as the y direction.
S110b: and carrying out weighting processing on the characteristic coordinate values corresponding to the two adjacent frames of images to obtain a target coordinate value.
Specifically, after the feature coordinate values corresponding to each frame of image in the motion feature curve are obtained, the feature coordinate values corresponding to each two adjacent frames of images are weighted to obtain the target coordinate values corresponding to each two adjacent frames of images.
In an actual application process, for each two adjacent frames of images, the target coordinate values of the two adjacent frames of images in the first direction and the second direction can be respectively determined by the following formula (12) and formula (13):
x = 0.05·x₁ + 0.95·x₂,    (12)

y = 0.05·y₁ + 0.95·y₂,    (13)

where x is the target coordinate value of the two adjacent frame images in the first direction (e.g. the x direction), x₁ is the coordinate component of the previous frame image in the first direction, and x₂ is the coordinate component of the next frame image in the first direction; y is the target coordinate value of the two adjacent frame images in the second direction (e.g. the y direction), y₁ is the coordinate component of the previous frame image in the second direction, and y₂ is the coordinate component of the next frame image in the second direction.
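The weighting of formulas (12) and (13) can be sketched as a recursive filter over one curve component. Applying the weight to the already-smoothed previous value, as below, is one reading of the scheme; the application only fixes the 0.05/0.95 weights between adjacent frames:

```python
import numpy as np

def smooth_curve(coords, alpha=0.05):
    """Smooth a motion characteristic curve per formulas (12)/(13):
    each frame's coordinate becomes alpha * previous + (1 - alpha) * current,
    which filters high-frequency motion noise out of the curve."""
    coords = np.asarray(coords, dtype=float).copy()
    for i in range(1, len(coords)):
        coords[i] = alpha * coords[i - 1] + (1 - alpha) * coords[i]
    return coords
```

A constant curve passes through unchanged, while a sudden jump is pulled slightly back toward the previous frame's value, which is the intended noise-filtering behaviour.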
S110c: and taking the target coordinate value as a characteristic coordinate value corresponding to the next frame of image in the two adjacent frames of images.
Specifically, after the target coordinate value of each two adjacent frame images is determined, for each two adjacent frame images, the obtained target coordinate value is used as the feature coordinate value corresponding to the next frame image in the two adjacent frame images, so as to implement the smoothing processing on the motion feature curve, thereby filtering the noise information in the motion feature curve.
In the embodiment provided by the application, the feature coordinate value corresponding to each frame of image in the motion feature curve is obtained, then the feature coordinate values corresponding to two adjacent frames of images are weighted to obtain the target coordinate value, and the target coordinate value is used as the feature coordinate value corresponding to the next frame of image in the two adjacent frames of images, so that the motion feature curve is smoothed. Therefore, the motion characteristic curve is smoothed in a manner of weighting the curve coordinate values of the motion characteristic curve corresponding to each two continuous frames of images, so that noise information in the motion characteristic curve is removed, the noise information in the motion characteristic curve is prevented from interfering the evaluation result of the stability of the target video, and the accuracy of evaluating the stability of the target video is ensured.
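The weighting of formulas (12) and (13) amounts to a simple smoothing pass over the motion characteristic curve. The sketch below is illustrative only (the function name and data layout are hypothetical); following S110c, it assumes that the already-replaced feature coordinate value of the previous frame is used when weighting the next pair of frames:

```python
def smooth_motion_curve(curve, w_prev=0.05, w_curr=0.95):
    """Smooth a motion characteristic curve per formulas (12) and (13).

    curve: list of (x, y) feature coordinate values, one per frame.
    For every two adjacent frames, the weighted target coordinate value
    replaces the feature coordinate value of the next frame (S110b/S110c).
    """
    smoothed = [curve[0]]  # the first frame has no preceding frame
    for x2, y2 in curve[1:]:
        x1, y1 = smoothed[-1]  # previous frame's (already smoothed) value
        smoothed.append((w_prev * x1 + w_curr * x2,
                         w_prev * y1 + w_curr * y2))
    return smoothed

curve = [(0.0, 0.0), (10.0, 4.0), (9.0, 4.5), (30.0, 5.0)]
smoothed = smooth_motion_curve(curve)
```

Whether the raw or the already-smoothed value of the previous frame is used is an implementation choice; the sketch uses the smoothed value, consistent with S110c.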
In this embodiment, the S104 may specifically include the following S104a and S104b:
S104a: and determining a target transformation matrix and a target offset between every two adjacent frames of images in the target video according to the pixel positions of the optical flow characteristic points in each frame of image of the target video.
It is understood that, between two consecutive images of the target video, the optical-flow vector of the optical-flow feature point between the two consecutive images is equivalent to the displacement amount of the optical-flow feature point between the two consecutive images. The optical flow vector of the optical flow characteristic point represents the displacement change rate of the optical flow characteristic point between two continuous frame images.
However, the optical-flow vector of an optical-flow feature point can only represent the motion change law, i.e., the motion feature, of the optical-flow feature point. Therefore, in the video stability determination method proposed in the embodiment of the present application, after the optical flow feature points in the target video are determined, according to the pixel positions of the optical flow feature points in each frame of image of the target video, the displacement instantaneous change rate of the optical flow feature points between each two adjacent frames of image is determined, that is, the optical flow vectors of the optical flow feature points between each two adjacent frames of image are determined.
On the basis, after the optical flow information of the optical flow characteristic points is obtained, namely after the motion characteristics of the optical flow characteristic points are determined, the motion characteristics of the whole target video are analyzed according to the target transformation model and the motion characteristics of the optical flow characteristic points, so that a target transformation matrix and a target offset relative to the target transformation model between every two adjacent frames of images in the target video are determined, and the motion path, namely a motion characteristic curve, of the target video is determined according to the determined target transformation matrix and the determined target offset.
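Since the optical flow vector between two adjacent frames is treated here as the displacement of a feature point between those frames, it can be obtained directly from the tracked pixel positions. A minimal illustrative sketch (the function name is hypothetical), assuming one (px, py) pixel position per frame for a single tracked feature point:

```python
def optical_flow_vectors(positions):
    """Per-frame displacement (optical flow) vectors of one feature point.

    positions: list of (px, py) pixel positions, one entry per frame.
    Returns one (dx, dy) vector for each pair of adjacent frames.
    """
    return [(x2 - x1, y2 - y1)
            for (x1, y1), (x2, y2) in zip(positions, positions[1:])]
```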
S104b: and determining a motion characteristic curve according to the target transformation matrix and the target offset.
Specifically, in the video stability determination method provided in the embodiment of the present application, an affine transformation model may be specifically used as the target transformation model.
Affine transformation, also called affine mapping, is a transformation in geometry in which one vector space undergoes a linear transformation followed by a translation to obtain another vector space. That is, an affine transformation is the composite of two functions: a translation and a linear mapping.
On the basis, the target transformation matrix is a linear mapping matrix in affine transformation, the target offset is a translation amount in affine transformation, and an affine mapping function of the optical flow feature points, namely a motion feature curve of the target video, can be obtained according to the determined linear mapping matrix and the translation amount.
For example, when the target transformation matrix is A and the target offset is b, the motion characteristic curve of the target video can be expressed by the following formula (14):

p' = A·p + b, (14)

wherein p represents the pixel position of an optical flow feature point, and p' represents the mapped position of the optical flow feature point, i.e., the motion characteristic curve of the target video.
On the basis, by decomposing the above formula (14), the motion characteristic curve component of the target video in the first direction, such as the x direction, and the motion characteristic curve component of the target video in the second direction, such as the y direction, can be obtained.
According to the embodiment provided by the application, the target transformation matrix and the target offset between every two adjacent frames of images in the target video are determined according to the pixel position of the optical flow characteristic point in each frame of image of the target video, and then the motion characteristic curve is determined according to the determined target transformation matrix and the determined target offset. In this way, an affine mapping function of the optical flow feature points, that is, a motion feature curve of the target video, is determined based on the affine transformation model, so as to represent the motion features of the target video, so that the stability degree of the target video is analyzed by analyzing the motion features of the target video in the following.
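The target transformation matrix A and target offset b between two adjacent frames can, for example, be estimated by least squares from the matched optical flow feature points, solving p' ≈ A·p + b. The sketch below is a hypothetical NumPy implementation of this idea, not the patent's exact method:

```python
import numpy as np

def estimate_affine(pts_prev, pts_next):
    """Least-squares fit of p' = A @ p + b from matched feature points.

    pts_prev, pts_next: (N, 2) arrays of pixel positions of the same
    optical flow feature points in two adjacent frames (N >= 3
    non-collinear points are needed for a unique solution).
    Returns the 2x2 target transformation matrix A and 2-vector offset b.
    """
    pts_prev = np.asarray(pts_prev, dtype=float)
    pts_next = np.asarray(pts_next, dtype=float)
    # Augment with a ones column so [A | b] is solved in one linear system.
    M = np.hstack([pts_prev, np.ones((len(pts_prev), 1))])
    params, *_ = np.linalg.lstsq(M, pts_next, rcond=None)
    A = params[:2].T  # linear mapping part of the affine transformation
    b = params[2]     # translation part of the affine transformation
    return A, b
```

Applying the fit to each pair of adjacent frames and accumulating the resulting transformations yields the motion path of the target video, i.e., the motion characteristic curve.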
In this embodiment of the application, the motion characteristic curve includes a first characteristic curve and a second characteristic curve, the first characteristic curve is a characteristic curve component of the target video in the first direction, and the second characteristic curve is a characteristic curve component of the target video in the second direction, on this basis, the above S106 may specifically include the following S106a to S106c:
s106a: and acquiring a first angle value between three coordinate points corresponding to every three adjacent frames of images in the first characteristic curve through the target model.
The first characteristic curve is a characteristic curve component of the target video in a first direction, such as the x direction, and the first characteristic curve represents a motion change rule of the target video in the first direction, such as the x direction.
Further, each first angle value is used to indicate the degree of fluctuation between corresponding three coordinate points in the first characteristic curve, that is, the first angle value is used to indicate the law of motion change in the first direction between every three adjacent images.
Further, the characteristic parameter value may specifically be a characteristic angle value, where the characteristic angle value is used to indicate a fluctuation degree of the motion characteristic curve, that is, to indicate a jitter degree of the target video.
Specifically, after a first characteristic curve of the target video is obtained, the first characteristic curve is input into the target model, three coordinate points corresponding to every three frames of adjacent images in the first characteristic curve are extracted through the target model, and a first angle value formed by every three coordinate points is determined, so that a characteristic angle value of the motion characteristic curve is determined according to the first angle value.
In addition, the first angle value is the supplementary angle value of an internal angle formed between the three coordinate points corresponding to every three adjacent frames of images in the first characteristic curve. The larger the supplementary angle value is, the steeper the first characteristic curve between the three coordinate points is, that is, the larger the degree of shake of the corresponding three adjacent frames of images of the target video in the first direction is; the smaller the supplementary angle value is, the flatter the first characteristic curve between the three coordinate points is, that is, the smaller the degree of shake of the corresponding three adjacent frames of images in the first direction is.
Illustratively, a first characteristic curve of the target video is shown in fig. 4. After the first characteristic curve is input into the target model, the target model extracts three coordinate points B, C and D corresponding to three adjacent frames of images in the first characteristic curve, and determines the angle value of the supplementary angle β of the internal angle α formed by the coordinate points B, C and D as the first angle value corresponding to the three adjacent frames of images.
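For three consecutive curve points such as B, C and D in fig. 4, the interior angle α at the middle point can be computed from the two edge vectors, and the angle value is its supplement (180° − α), consistent with larger values indicating a steeper curve. A minimal illustrative sketch (the function name is hypothetical):

```python
import math

def supplement_angle(b, c, d):
    """First/second angle value for three consecutive curve points.

    Returns the supplement (180 - alpha) of the interior angle alpha at
    the middle point c, formed by points b, c, d. A flat (straight-line)
    curve gives a value near 0; a sharp kink gives a large value.
    """
    v1 = (b[0] - c[0], b[1] - c[1])
    v2 = (d[0] - c[0], d[1] - c[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    # Clamp to [-1, 1] to guard against floating-point drift in acos.
    cos_alpha = max(-1.0, min(1.0, dot / (n1 * n2)))
    return 180.0 - math.degrees(math.acos(cos_alpha))
```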
S106b: and acquiring a second angle value between three coordinate points corresponding to every three adjacent frames of images in the second characteristic curve through the target model.
The second characteristic curve is a characteristic curve component of the target video in a second direction, such as the y direction, and the second characteristic curve represents a motion change rule of the target video in the second direction, such as the y direction.
Further, the second angle value is used to indicate the degree of fluctuation between corresponding three coordinate points in the second characteristic curve, that is, the second angle value is used to indicate the law of motion change between every three adjacent images in the second direction.
Specifically, after a second characteristic curve of the target video is obtained, the second characteristic curve is input into the target model, three coordinate points corresponding to every three frames of adjacent images in the second characteristic curve are extracted through the target model, and a second angle value formed by every three coordinate points is determined, so that the characteristic angle value of the motion characteristic curve is determined according to the second angle value.
In addition, the second angle value is the supplementary angle value of an internal angle formed between the three coordinate points corresponding to every three adjacent frames of images in the second characteristic curve. The larger the supplementary angle value is, the steeper the second characteristic curve between the three coordinate points is, that is, the larger the degree of shake of the corresponding three adjacent frames of images of the target video in the second direction is; the smaller the supplementary angle value is, the flatter the second characteristic curve between the three coordinate points is, that is, the smaller the degree of shake of the corresponding three adjacent frames of images in the second direction is.
S106c: and determining a characteristic parameter value according to the first angle value and the second angle value.
The first angle value is used for indicating a motion change rule between every three adjacent images in a first direction, and the second angle value is used for indicating a motion change rule between every three adjacent images in a second direction.
Specifically, after the target model determines a first angle value between three coordinate points corresponding to every three frames of adjacent images in the first characteristic curve and a second angle value between three coordinate points corresponding to every three frames of adjacent images in the second characteristic curve, the target model determines the characteristic angle value of the motion characteristic curve according to the determined first angle values and the second angle values.
In practical application, the following formula (15) may be set in the target model to determine the characteristic angle value of the motion characteristic curve:

T = (T_x1 + T_x2 + … + T_xn + T_y1 + T_y2 + … + T_ym) / (n + m), (15)

wherein T is the characteristic angle value, n is the number of first angle values, m is the number of second angle values, T_xi is the i-th first angle value, and T_yi is the i-th second angle value.
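Aggregating the angle values can be sketched as follows. This assumes formula (15) takes the mean of all n first angle values and m second angle values, which is an interpretation consistent with the surrounding description but not confirmed by the (garbled) original formula:

```python
def characteristic_angle_value(first_angles, second_angles):
    """Characteristic angle value T of the motion characteristic curve.

    Assumption: formula (15) is the mean over all n first angle values
    (x direction) and m second angle values (y direction).
    """
    angles = list(first_angles) + list(second_angles)
    return sum(angles) / len(angles)
```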
In the above embodiment provided by the present application, the motion characteristic curve includes a first characteristic curve and a second characteristic curve, the first characteristic curve is the characteristic curve component of the target video in the first direction, and the second characteristic curve is the characteristic curve component of the target video in the second direction. On this basis, a first angle value between the three coordinate points corresponding to every three adjacent frames of images in the first characteristic curve is obtained through the target model, each first angle value being used for indicating the degree of fluctuation between the corresponding three coordinate points in the first characteristic curve; a second angle value between the three coordinate points corresponding to every three adjacent frames of images in the second characteristic curve is obtained through the target model, each second angle value being used for indicating the degree of fluctuation between the corresponding three coordinate points in the second characteristic curve; and the characteristic parameter value is determined according to the first angle values and the second angle values. In this way, the degree of fluctuation of the motion characteristic curve of the target video is analyzed based on the angle change rules of the motion path of the target video in the first direction and the second direction, that is, the motion change rule of the target video is analyzed, thereby ensuring the accuracy of the stability analysis of the target video.
It should be noted that there is no clear execution sequence between S106a and S106b, S106b may be executed after S106a is executed, S106a may be executed after S106b is executed, or S106a and S106b may be executed simultaneously. The execution order of S106a and S106b is not particularly limited herein.
In this embodiment, the S108 may specifically include the following S108a and S108b:
s108a: the characteristic parameter value is compared with at least one target threshold value.
The characteristic parameter value may specifically be a characteristic angle value, and the characteristic angle value is expressed in degrees.
Further, the at least one target threshold is used for dividing the stability level of the target video.
In an actual application process, the at least one target threshold may specifically include a first threshold and a second threshold. The first threshold is smaller than the second threshold, a stability grade is divided in a range from zero to the first threshold, a stability grade is divided in a range from the first threshold to the second threshold, and a stability grade is divided in a range from the second threshold to infinity.
Specifically, after the characteristic angle value of the target video is determined, the characteristic angle value is compared with the first threshold and the second threshold to determine a numerical range in which the characteristic angle value is located, so that the stability level of the target video is determined, and the stability degree of the target video is determined.
S108b: and determining the stability degree of the target video according to the comparison result.
Wherein, the characteristic parameter value is inversely related to the stability degree, i.e. the larger the characteristic parameter value is, the worse the stability of the target video is.
Specifically, after the characteristic angle value of the target video is determined, in the case that the characteristic angle value is smaller than the first threshold, the stability of the target video is considered to be good, in the case that the characteristic angle value is greater than or equal to the first threshold and smaller than the second threshold, the stability of the target video is considered to be poor, and in the case that the characteristic angle value is greater than or equal to the second threshold, the stability of the target video is considered to be extremely poor.
In an actual application process, a person skilled in the art may set specific values of the first threshold and the second threshold according to actual situations, and the setting is not limited specifically herein.
In addition, in the actual application process, the stability levels of the target videos can be divided more finely by setting a plurality of target threshold values, so that the accuracy of the video stability determination result is improved.
In the foregoing embodiment provided by the present application, when determining the stability of the target video according to the characteristic parameter value, specifically, the characteristic parameter value is compared with at least one target threshold, and then the stability degree of the target video is determined according to the comparison result, where the characteristic parameter value is inversely related to the stability degree. Therefore, the stability of the video is automatically analyzed based on the characteristic parameter value capable of representing the fluctuation degree or the jitter degree of the video without manual review, the evaluation efficiency is ensured, the comprehensiveness and the accuracy of the evaluation result are improved, and the requirement of evaluating the stability of the video in a large batch is met.
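The two-threshold grading of S108a and S108b can be sketched as follows. The threshold values used in the usage line are purely illustrative placeholders; as noted above, the patent leaves the specific values of the first and second thresholds open:

```python
def stability_level(t, first_threshold, second_threshold):
    """Map a characteristic angle value to a stability grade (S108a/S108b).

    The characteristic parameter value is inversely related to stability:
    a larger characteristic angle value means a shakier target video.
    Requires first_threshold < second_threshold.
    """
    if t < first_threshold:
        return "good"
    if t < second_threshold:
        return "poor"
    return "very poor"

# Illustrative thresholds only; not values from the patent.
level = stability_level(3.0, 5.0, 15.0)
```

More target thresholds can be added in the same way to divide the stability levels more finely, as described above.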
In the video stability determination method provided by the embodiment of the first aspect of the present application, the execution subject may be a video stability determination apparatus. In the embodiment of the second aspect of the present application, the video stability determination apparatus is described by taking, as an example, the case in which the video stability determination apparatus executes the video stability determination method.
As shown in fig. 5, an embodiment of the present application provides a video stability determination apparatus 500, which may include an acquisition unit 502, a first processing unit 504, a second processing unit 506, and a third processing unit 508, which are described below.
An acquisition unit 502 for acquiring optical flow feature points of a target video;
the first processing unit 504 is configured to determine a motion characteristic curve of the target video according to pixel positions of the optical flow characteristic points in each frame of image of the target video;
a second processing unit 506, configured to determine a characteristic parameter value of the motion characteristic curve according to the target model and the motion characteristic curve, where the characteristic parameter value is used to indicate a fluctuation degree of the motion characteristic curve;
a third processing unit 508, configured to determine the stability of the target video according to the characteristic parameter value.
By the video stability determination device provided by the embodiment of the application, the optical flow characteristic points of the target video are obtained, and the motion characteristic curve of the target video is determined according to the pixel positions of the optical flow characteristic points in each frame of image of the target video. On the basis, according to the target model and the motion characteristic curve, the characteristic parameter value of the motion characteristic curve is determined, and then the stability of the target video is determined according to the characteristic parameter value. Wherein, the characteristic parameter value is used for indicating the fluctuation degree of the motion characteristic curve. That is, according to the video stability determining method, when the stability of the target video is evaluated, the motion path of the target video is determined based on the pixel position of the optical flow feature point in the target video in each frame image of the target video, so as to obtain the motion feature curve of the target video, and then the feature parameter value for indicating the fluctuation degree of the motion feature curve is determined according to the motion feature curve of the target video and the target model, and the stability of the target video is evaluated according to the feature parameter value. Therefore, the stability of the video can be automatically analyzed based on the motion characteristic information of the video, namely the characteristic parameter value capable of representing the motion fluctuation degree or the jitter degree of the video without manual review, the evaluation efficiency is ensured, the comprehensiveness and the accuracy of the evaluation result are improved, and the requirement of evaluating the stability of the video in batches is met.
In this embodiment of the application, the video stability determining apparatus 500 further includes: the fourth processing unit 510 is configured to perform smoothing processing on the motion characteristic curve to remove noise information in the motion characteristic curve.
According to the embodiment provided by the application, after the motion characteristic curve of the target video is determined, the motion characteristic curve is subjected to smoothing processing so as to remove noise information in the motion characteristic curve. Therefore, in the subsequent process of evaluating the stability of the target video according to the motion characteristic curve, the noise information in the motion characteristic curve is prevented from interfering the evaluation result, and the accuracy of evaluating the stability of the target video is ensured.
In this embodiment of the application, the fourth processing unit 510 is specifically configured to: acquiring a characteristic coordinate value corresponding to each frame of image in the motion characteristic curve; weighting the feature coordinate values corresponding to two adjacent frames of images to obtain target coordinate values; and taking the target coordinate value as a characteristic coordinate value corresponding to the next frame of image in the two adjacent frames of images.
In the embodiment provided by the application, the feature coordinate value corresponding to each frame of image in the motion feature curve is obtained, and then the feature coordinate values corresponding to two adjacent frames of images are weighted to obtain the target coordinate value, and the target coordinate value is used as the feature coordinate value corresponding to the next frame of image in the two adjacent frames of images, so that the motion feature curve is smoothed. Therefore, the motion characteristic curve is smoothed in a manner of weighting the curve coordinate values of the motion characteristic curve corresponding to each two continuous frames of images, so that noise information in the motion characteristic curve is removed, the noise information in the motion characteristic curve is prevented from interfering the evaluation result of the stability of the target video, and the accuracy of evaluating the stability of the target video is ensured.
In this embodiment of the application, the first processing unit 504 is specifically configured to: determining a target transformation matrix and a target offset between every two adjacent frames of images in the target video according to the pixel position of the optical flow characteristic point in each frame of image of the target video; and determining a motion characteristic curve according to the target transformation matrix and the target offset.
According to the embodiment provided by the application, the target transformation matrix and the target offset between every two adjacent frames of images in the target video are determined according to the pixel position of the optical flow characteristic point in each frame of image of the target video, and then the motion characteristic curve is determined according to the determined target transformation matrix and the determined target offset. In this way, an affine mapping function of the optical flow feature points, that is, a motion feature curve of the target video, is determined based on the affine transformation model, so as to represent the motion features of the target video, so that the stability degree of the target video is analyzed by analyzing the motion features of the target video in the following.
In this embodiment of the application, the motion characteristic curve includes a first characteristic curve and a second characteristic curve, the first characteristic curve is a characteristic curve component of the target video in a first direction, the second characteristic curve is a characteristic curve component of the target video in a second direction, and the second processing unit 506 is specifically configured to: acquiring a first angle value between three coordinate points corresponding to every three adjacent frames of images in the first characteristic curve through the target model, wherein the first angle value is used for indicating the fluctuation degree between the three corresponding coordinate points in the first characteristic curve; acquiring a second angle value between three coordinate points corresponding to every three adjacent frames of images in the second characteristic curve through the target model, wherein the second angle value is used for indicating the fluctuation degree between the three corresponding coordinate points in the second characteristic curve; and determining a characteristic parameter value according to the first angle value and the second angle value.
In the above embodiment provided by the present application, the motion characteristic curve includes a first characteristic curve and a second characteristic curve, the first characteristic curve is the characteristic curve component of the target video in the first direction, and the second characteristic curve is the characteristic curve component of the target video in the second direction. On this basis, a first angle value between the three coordinate points corresponding to every three adjacent frames of images in the first characteristic curve is obtained through the target model, each first angle value being used for indicating the degree of fluctuation between the corresponding three coordinate points in the first characteristic curve; a second angle value between the three coordinate points corresponding to every three adjacent frames of images in the second characteristic curve is obtained through the target model, each second angle value being used for indicating the degree of fluctuation between the corresponding three coordinate points in the second characteristic curve; and the characteristic parameter value is then determined according to the first angle values and the second angle values. In this way, the degree of fluctuation of the motion characteristic curve of the target video is analyzed based on the angle change rules of the motion path of the target video in the first direction and the second direction, that is, the motion change rule of the target video is analyzed, thereby ensuring the accuracy of the stability analysis of the target video.
In this embodiment of the application, the third processing unit 508 is specifically configured to: comparing the characteristic parameter value with at least one target threshold value; determining the stability degree of the target video according to the comparison result; wherein the characteristic parameter value is inversely related to the degree of stability.
In the above embodiment provided by the present application, when determining the stability of the target video according to the characteristic parameter value, specifically, the characteristic parameter value is compared with at least one target threshold, and then the stability degree of the target video is determined according to the comparison result, where the characteristic parameter value is inversely related to the stability degree. Therefore, the stability of the video is automatically analyzed based on the characteristic parameter value capable of representing the fluctuation degree or the jitter degree of the video without manual review, the evaluation efficiency is ensured, the comprehensiveness and the accuracy of the evaluation result are improved, and the requirement of evaluating the stability of the video in a large batch is met.
The video stability determination apparatus 500 in the embodiment of the present application may be an electronic device, or may be a component in the electronic device, such as an integrated circuit or a chip. The electronic device may be a terminal, or may be a device other than a terminal. For example, the electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a Mobile Internet Device (MID), an Augmented Reality (AR)/Virtual Reality (VR) device, a robot, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and may also be a server, a Network Attached Storage (NAS), a personal computer (PC), a television (TV), a teller machine, a self-service machine, and the like, which is not specifically limited in the embodiments of the present application.
The video stability determination device 500 in the embodiment of the present application may be a device having an operating system. The operating system may be an Android operating system, an iOS operating system, or other possible operating systems, which is not specifically limited in the embodiment of the present application.
The video stability determining apparatus 500 provided in the second aspect of the present application can implement each process implemented in the method embodiment of fig. 1, and for avoiding repetition, details are not repeated here.
Optionally, as shown in fig. 6, an electronic device 600 is further provided in an embodiment of the present application, and includes a processor 602 and a memory 604, where the memory 604 stores a program or an instruction that can be executed on the processor 602, and when the program or the instruction is executed by the processor 602, the steps of the embodiment of the video stability determination method in the first aspect described above are implemented, and the same technical effects can be achieved, and are not repeated here to avoid repetition.
It should be noted that the electronic devices in the embodiments of the present application include the mobile electronic device and the non-mobile electronic device described above.
Fig. 7 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 700 includes, but is not limited to: a radio frequency unit 701, a network module 702, an audio output unit 703, an input unit 704, a sensor 705, a display unit 706, a user input unit 707, an interface unit 708, a memory 709, and a processor 710.
Those skilled in the art will appreciate that the electronic device 700 may also include a power supply (e.g., a battery) for powering the various components, and the power supply may be logically coupled to the processor 710 via a power management system, such that functions of managing charging, discharging, and power consumption may be performed via the power management system. The electronic device structure shown in fig. 7 does not constitute a limitation of the electronic device, and the electronic device may include more or fewer components than those shown, or combine some components, or arrange different components, which is not described here again.
The electronic device 700 of the embodiment of the present application may be configured to implement the steps of the above-described embodiment of the video stability determination method of the first aspect.
The processor 710 is configured to acquire optical flow feature points of the target video.
The processor 710 is further configured to determine a motion characteristic curve of the target video according to the pixel positions of the optical flow characteristic points in each frame of image of the target video.
The processor 710 is further configured to determine a characteristic parameter value of the motion characteristic curve according to the target model and the motion characteristic curve, where the characteristic parameter value is used to indicate a degree of fluctuation of the motion characteristic curve.
The processor 710 is further configured to determine the stability of the target video according to the characteristic parameter values.
In the embodiment of the present application, the optical flow feature points of the target video are acquired, and the motion characteristic curve of the target video is determined according to the pixel positions of the optical flow feature points in each frame of image of the target video. On this basis, the characteristic parameter value of the motion characteristic curve is determined according to the target model and the motion characteristic curve, where the characteristic parameter value is used for indicating the fluctuation degree of the motion characteristic curve, and the stability of the target video is then determined according to the characteristic parameter value. In other words, when evaluating the stability of the target video, the video stability determination method first derives the motion path of the target video from the pixel positions of the optical flow feature points in each frame of image of the target video, so as to obtain the motion characteristic curve of the target video; it then determines, from the motion characteristic curve and the target model, the characteristic parameter value indicating the fluctuation degree of the curve, and evaluates the stability of the target video according to that value. In this way, the stability of a video can be analyzed automatically based on the motion characteristic information of the video, that is, the characteristic parameter value capable of representing the motion fluctuation degree or jitter degree of the video, without manual review, which ensures evaluation efficiency, improves the comprehensiveness and accuracy of the evaluation result, and meets the requirement of evaluating video stability in batches.
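As an illustrative sketch of how a motion characteristic curve might be built from tracked optical flow feature points: the patent does not fix a concrete formula, so the accumulation of mean per-frame displacements below, and the function name `motion_curve`, are assumptions.

```python
import numpy as np

def motion_curve(points_per_frame):
    """Accumulate the mean inter-frame displacement of tracked feature
    points into a per-frame motion curve with x and y components."""
    curve = [np.zeros(2)]
    for prev, curr in zip(points_per_frame, points_per_frame[1:]):
        # mean displacement of all tracked points between the two frames
        curve.append(curve[-1] + (curr - prev).mean(axis=0))
    return np.stack(curve)

# Three frames with two tracked points drifting 1 px to the right per frame.
frames = [np.array([[0.0, 0.0], [10.0, 0.0]]),
          np.array([[1.0, 0.0], [11.0, 0.0]]),
          np.array([[2.0, 0.0], [12.0, 0.0]])]
print(motion_curve(frames)[:, 0])  # cumulative x motion: [0. 1. 2.]
```

The two columns of such a curve correspond to the characteristic curve components in the first and second directions that the embodiment analyzes separately.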
Optionally, the processor 710 is further configured to: and smoothing the motion characteristic curve to remove noise information in the motion characteristic curve.
According to the embodiment provided by the application, after the motion characteristic curve of the target video is determined, the motion characteristic curve is subjected to smoothing processing so as to remove noise information in the motion characteristic curve. Therefore, in the subsequent process of evaluating the stability of the target video according to the motion characteristic curve, noise information in the motion characteristic curve is prevented from interfering the evaluation result, and the accuracy of evaluating the stability of the target video is ensured.
Optionally, the processor 710 is specifically configured to: acquiring a characteristic coordinate value corresponding to each frame of image in the motion characteristic curve; weighting the feature coordinate values corresponding to two adjacent frames of images to obtain target coordinate values; and taking the target coordinate value as a characteristic coordinate value corresponding to the next frame of image in the two adjacent frames of images.
In the embodiment provided by the application, the feature coordinate value corresponding to each frame of image in the motion feature curve is obtained, then the feature coordinate values corresponding to two adjacent frames of images are weighted to obtain the target coordinate value, and the target coordinate value is used as the feature coordinate value corresponding to the next frame of image in the two adjacent frames of images, so that the motion feature curve is smoothed. Therefore, the motion characteristic curve is smoothed in a manner of weighting the curve coordinate values of the motion characteristic curve corresponding to each two continuous frames of images, so that noise information in the motion characteristic curve is removed, the noise information in the motion characteristic curve is prevented from interfering the evaluation result of the stability of the target video, and the accuracy of evaluating the stability of the target video is ensured.
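A minimal sketch of the weighted smoothing described above, in which each frame's coordinate value is blended with the smoothed value of the preceding frame and the result is kept as the latter frame's coordinate. The actual weights are not specified in the patent; `alpha` and the function name are illustrative.

```python
def smooth_curve(coords, alpha=0.75):
    """Smooth a per-frame coordinate sequence by replacing each frame's
    value with a weighted blend of the previous smoothed value and the
    current raw value (the latter frame keeps the blended result)."""
    smoothed = [coords[0]]
    for c in coords[1:]:
        # weight the two adjacent frames; higher alpha = stronger smoothing
        smoothed.append(alpha * smoothed[-1] + (1 - alpha) * c)
    return smoothed

print(smooth_curve([0.0, 10.0, 10.0]))  # → [0.0, 2.5, 4.375]
```

A step jump of 10 px is spread over several frames, which is exactly the noise-suppressing effect the embodiment relies on before evaluating stability.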
Optionally, the processor 710 is specifically configured to: determining a target transformation matrix and a target offset between every two adjacent frames of images in the target video according to the pixel position of the optical flow characteristic point in each frame of image of the target video; and determining a motion characteristic curve according to the target transformation matrix and the target offset.
According to the embodiment provided by the application, the target transformation matrix and the target offset between every two adjacent frames of images in the target video are determined according to the pixel position of the optical flow characteristic point in each frame of image of the target video, and then the motion characteristic curve is determined according to the determined target transformation matrix and the determined target offset. In this way, an affine mapping function of the optical flow feature points, that is, a motion feature curve of the target video, is determined based on the affine transformation model, so as to represent the motion features of the target video, so that the stability degree of the target video is analyzed by analyzing the motion features of the target video in the following.
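The target transformation matrix and target offset between two adjacent frames can be fitted from matched feature-point positions by least squares. The plain-NumPy sketch below is one way to do this (an OpenCV routine such as `cv2.estimateAffinePartial2D` would serve the same purpose); the function name and solver choice are assumptions.

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares fit of dst ≈ src @ A.T + t, where A is the 2x2
    target transformation matrix and t the target offset between the
    feature points of two adjacent frames."""
    ones = np.ones((len(src), 1))
    X = np.hstack([src, ones])                     # rows of [x, y, 1]
    sol, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return sol[:2].T, sol[2]                       # (A, t)

# Matched feature points related by a pure translation of (2, 3).
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
dst = src + np.array([2.0, 3.0])
A, t = estimate_affine(src, dst)
print(np.round(A, 6))  # identity matrix: no rotation or scaling
print(np.round(t, 6))  # offset [2. 3.]
```

Chaining these per-frame-pair transforms over the whole video yields the affine mapping from which the motion characteristic curve is derived.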
Optionally, the motion characteristic curve includes a first characteristic curve and a second characteristic curve, where the first characteristic curve is a characteristic curve component of the target video in a first direction, and the second characteristic curve is a characteristic curve component of the target video in a second direction, and the processor 710 is specifically configured to: acquiring a first angle value between three coordinate points corresponding to every three adjacent frames of images in the first characteristic curve through the target model, wherein the first angle value is used for indicating the fluctuation degree between the three corresponding coordinate points in the first characteristic curve; acquiring a second angle value between three coordinate points corresponding to every three adjacent frames of images in the second characteristic curve through the target model, wherein the second angle value is used for indicating the fluctuation degree between the three corresponding coordinate points in the second characteristic curve; and determining a characteristic parameter value according to the first angle value and the second angle value.
In the above embodiment provided by the present application, the motion characteristic curve includes a first characteristic curve and a second characteristic curve, where the first characteristic curve is the characteristic curve component of the target video in the first direction, and the second characteristic curve is the characteristic curve component of the target video in the second direction. On this basis, a first angle value between the three coordinate points corresponding to every three adjacent frames of images in the first characteristic curve is acquired through the target model, where each first angle value is used for indicating the fluctuation degree between the corresponding three coordinate points in the first characteristic curve; a second angle value between the three coordinate points corresponding to every three adjacent frames of images in the second characteristic curve is acquired through the target model, where the second angle value is used for indicating the fluctuation degree between the corresponding three coordinate points in the second characteristic curve; and the characteristic parameter value is determined according to the first angle value and the second angle value. In this way, the fluctuation degree of the motion characteristic curve of the target video is analyzed based on the angle change rules of the motion path of the target video in the first direction and the second direction, that is, the motion change rule of the target video is analyzed, which ensures the accuracy of the stability analysis of the target video.
Optionally, the processor 710 is specifically configured to: comparing the characteristic parameter value with at least one target threshold value; determining the stability degree of the target video according to the comparison result; wherein the characteristic parameter value is inversely related to the degree of stability.
In the foregoing embodiment provided by the present application, when determining the stability of the target video according to the characteristic parameter value, specifically, the characteristic parameter value is compared with at least one target threshold, and then the stability degree of the target video is determined according to the comparison result, where the characteristic parameter value is inversely related to the stability degree. Therefore, the stability of the video is automatically analyzed based on the characteristic parameter value capable of representing the fluctuation degree or the jitter degree of the video without manual review, the evaluation efficiency is ensured, the comprehensiveness and the accuracy of the evaluation result are improved, and the requirement of evaluating the stability of the video in a large batch is met.
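The threshold comparison above can be sketched as follows, with larger characteristic parameter values (more fluctuation) mapping to lower stability, matching the inverse relation stated in the embodiment. The number of thresholds, their values, and the grade labels are all illustrative assumptions.

```python
def stability_grade(param_value, thresholds=(5.0, 15.0)):
    """Compare the characteristic parameter value against ascending
    target thresholds; the parameter is inversely related to stability."""
    if param_value < thresholds[0]:
        return "stable"
    if param_value < thresholds[1]:
        return "moderately stable"
    return "unstable"

print(stability_grade(2.0))   # → stable
print(stability_grade(50.0))  # → unstable
```

Using several thresholds rather than a single one allows the stability of a batch of videos to be graded rather than merely accepted or rejected.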
It should be understood that in the embodiment of the present application, the input Unit 704 may include a Graphics Processing Unit (GPU) 7041 and a microphone 7042, and the Graphics Processing Unit 7041 processes image data of still pictures or videos obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The display unit 706 may include a display panel 7061, and the display panel 7061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 707 includes at least one of a touch panel 7071 and other input devices 7072. The touch panel 7071 is also referred to as a touch screen. The touch panel 7071 may include two parts of a touch detection device and a touch controller. Other input devices 7072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
The memory 709 may be used to store software programs as well as various data. The memory 709 may mainly include a first storage area for storing a program or an instruction and a second storage area for storing data, where the first storage area may store an operating system, an application program or an instruction required by at least one function (such as a sound playing function or an image playing function), and the like. Further, the memory 709 may include a volatile memory or a nonvolatile memory, or the memory 709 may include both volatile and nonvolatile memories. The nonvolatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), a static RAM (SRAM), a dynamic RAM (DRAM), a synchronous DRAM (SDRAM), a double data rate SDRAM (DDR SDRAM), an enhanced SDRAM (ESDRAM), a synchlink DRAM (SLDRAM), or a direct rambus RAM (DRRAM). The memory 709 in the embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
Processor 710 may include one or more processing units; optionally, the processor 710 integrates an application processor, which primarily handles operations related to the operating system, user interface, and applications, and a modem processor, which primarily handles wireless communication signals, such as a baseband processor. It will be appreciated that the modem processor described above may not be integrated into processor 710.
An embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the process of the embodiment of the video stability determination method according to the first aspect is implemented, and the same technical effect can be achieved, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device in the foregoing embodiment. The readable storage medium includes a computer-readable storage medium, such as a computer read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
An embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the video stability determination method embodiment of the first aspect, and the same technical effect can be achieved.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
Embodiments of the present application provide a computer program product, where the program product is stored in a storage medium, and the program product is executed by at least one processor to implement the processes of the embodiment of the video stability determination method according to the first aspect, and achieve the same technical effects, and in order to avoid repetition, details are not repeated here.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of another identical element in the process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatuses in the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may also include performing the functions in a substantially simultaneous manner or in a reverse order; for example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application or portions thereof that contribute to the prior art may be embodied in the form of a computer software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes several instructions for enabling a terminal (which may be a mobile phone, a computer, a server, or a network device, etc.) to execute the method of the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. A video stability determination method, comprising:
acquiring optical flow characteristic points of a target video;
determining a motion characteristic curve of the target video according to the pixel position of the optical flow characteristic point in each frame of image of the target video;
determining a characteristic parameter value of the motion characteristic curve according to a target model and the motion characteristic curve, wherein the characteristic parameter value is used for indicating the fluctuation degree of the motion characteristic curve;
and determining the stability of the target video according to the characteristic parameter value.
2. The video stability determination method of claim 1, wherein after the determining the motion profile of the target video, the video stability determination method further comprises:
smoothing the motion characteristic curve to remove noise information in the motion characteristic curve;
the smoothing of the motion characteristic curve includes:
acquiring a characteristic coordinate value corresponding to each frame of image in the motion characteristic curve;
weighting the feature coordinate values corresponding to two adjacent frames of images to obtain target coordinate values;
and taking the target coordinate value as a characteristic coordinate value corresponding to the next frame of image in the two adjacent frames of images.
3. The video stability determination method according to claim 1, wherein the determining a motion characteristic curve of the target video according to the pixel position of the optical flow characteristic point in each frame of image of the target video comprises:
determining a target transformation matrix and a target offset between every two adjacent frames of images in the target video according to the pixel position of the optical flow characteristic point in each frame of image of the target video;
and determining the motion characteristic curve according to the target transformation matrix and the target offset.
4. The method according to claim 3, wherein the motion characteristic curve comprises a first characteristic curve and a second characteristic curve, the first characteristic curve is a characteristic curve component of the target video in a first direction, the second characteristic curve is a characteristic curve component of the target video in a second direction, and the determining the characteristic parameter value of the motion characteristic curve according to the target model and the motion characteristic curve comprises:
acquiring first angle values among three coordinate points corresponding to every three frames of adjacent images in the first characteristic curve through the target model, wherein each first angle value is used for indicating the fluctuation degree among the three corresponding coordinate points in the first characteristic curve;
acquiring a second angle value between three coordinate points corresponding to every three frames of adjacent images in the second characteristic curve through the target model, wherein the second angle value is used for indicating the fluctuation degree between the three corresponding coordinate points in the second characteristic curve;
and determining the characteristic parameter value according to the first angle value and the second angle value.
5. The method according to any one of claims 1 to 4, wherein the determining the stability of the target video according to the characteristic parameter value comprises:
comparing the characteristic parameter value to at least one target threshold value;
determining the stability degree of the target video according to the comparison result;
wherein the characteristic parameter value is inversely related to the degree of stability.
6. A video stability determination apparatus, comprising:
an acquisition unit configured to acquire optical flow feature points of a target video;
the first processing unit is used for determining a motion characteristic curve of the target video according to the pixel position of the optical flow characteristic point in each frame of image of the target video;
the second processing unit is used for determining a characteristic parameter value of the motion characteristic curve according to a target model and the motion characteristic curve, wherein the characteristic parameter value is used for indicating the fluctuation degree of the motion characteristic curve;
and the third processing unit is used for determining the stability of the target video according to the characteristic parameter value.
7. The video stability determination device of claim 6, wherein the video stability determination device further comprises:
the fourth processing unit is used for carrying out smoothing processing on the motion characteristic curve and removing noise information in the motion characteristic curve;
the fourth processing unit is specifically configured to:
acquiring a characteristic coordinate value corresponding to each frame of image in the motion characteristic curve;
weighting the feature coordinate values corresponding to two adjacent frames of images to obtain target coordinate values;
and taking the target coordinate value as a characteristic coordinate value corresponding to the next frame of image in the two adjacent frames of images.
8. The video stability determination device of claim 6, wherein the first processing unit is specifically configured to:
determining a target transformation matrix and a target offset between every two adjacent frames of images in the target video according to the pixel position of the optical flow characteristic point in each frame of image of the target video;
and determining the motion characteristic curve according to the target transformation matrix and the target offset.
9. The video stability determination device according to claim 8, wherein the motion characteristic curve comprises a first characteristic curve and a second characteristic curve, the first characteristic curve is a characteristic curve component of the target video in a first direction, the second characteristic curve is a characteristic curve component of the target video in a second direction, and the second processing unit is specifically configured to:
acquiring a first angle value between three coordinate points corresponding to every three frames of adjacent images in the first characteristic curve through the target model, wherein the first angle value is used for indicating the fluctuation degree between the three corresponding coordinate points in the first characteristic curve;
acquiring a second angle value between three coordinate points corresponding to every three frames of adjacent images in the second characteristic curve through the target model, wherein the second angle value is used for indicating the fluctuation degree between the three corresponding coordinate points in the second characteristic curve;
and determining the characteristic parameter value according to the first angle value and the second angle value.
10. The video stability determination device according to any one of claims 6 to 9, wherein the third processing unit is specifically configured to:
comparing the characteristic parameter value to at least one target threshold value;
determining the stability degree of the target video according to the comparison result;
wherein the characteristic parameter value is inversely related to the degree of stability.
CN202211470833.5A 2022-11-23 2022-11-23 Video stability determination method and device Pending CN115866240A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211470833.5A CN115866240A (en) 2022-11-23 2022-11-23 Video stability determination method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211470833.5A CN115866240A (en) 2022-11-23 2022-11-23 Video stability determination method and device

Publications (1)

Publication Number Publication Date
CN115866240A true CN115866240A (en) 2023-03-28

Family

ID=85665188

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211470833.5A Pending CN115866240A (en) 2022-11-23 2022-11-23 Video stability determination method and device

Country Status (1)

Country Link
CN (1) CN115866240A (en)

Similar Documents

Publication Publication Date Title
CN113286194A (en) Video processing method and device, electronic equipment and readable storage medium
CN112308797B (en) Corner detection method and device, electronic equipment and readable storage medium
CN112561846A (en) Method and device for training image fusion model and electronic equipment
CN114390201A (en) Focusing method and device thereof
CN113489909B (en) Shooting parameter determining method and device and electronic equipment
Li et al. A full-process optimization-based background subtraction for moving object detection on general-purpose embedded devices
CN112367486B (en) Video processing method and device
CN113628259A (en) Image registration processing method and device
CN114998814B (en) Target video generation method and device, computer equipment and storage medium
CN115866240A (en) Video stability determination method and device
CN115439386A (en) Image fusion method and device, electronic equipment and storage medium
US9215474B2 (en) Block-based motion estimation method
CN115660969A (en) Image processing method, model training method, device, equipment and storage medium
Wu et al. Locally low-rank regularized video stabilization with motion diversity constraints
JP5683153B2 (en) Image processing apparatus and image processing method
CN114782280A (en) Image processing method and device
CN111754417A (en) Noise reduction method and device for video image, video matting method and device and electronic system
CN116342992A (en) Image processing method and electronic device
CN117541507A (en) Image data pair establishing method and device, electronic equipment and readable storage medium
Li et al. [Retracted] Machine‐Type Video Communication Using Pretrained Network for Internet of Things
CN115100236A (en) Video processing method and device
CN117119292A (en) Image processing method and device
CN114708168A (en) Image processing method and electronic device
CN115174811A (en) Camera shake detection method, device, equipment, storage medium and program product
CN116309130A (en) Image processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination