CN116260928A - Visual optimization method based on intelligent frame insertion - Google Patents

Visual optimization method based on intelligent frame insertion

Info

Publication number
CN116260928A
CN116260928A
Authority
CN
China
Prior art keywords
video
frames
frame
value
difference
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310538891.5A
Other languages
Chinese (zh)
Other versions
CN116260928B (en)
Inventor
邓正秋
吕绍和
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan Malanshan Video Advanced Technology Research Institute Co ltd
Original Assignee
Hunan Malanshan Video Advanced Technology Research Institute Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan Malanshan Video Advanced Technology Research Institute Co ltd filed Critical Hunan Malanshan Video Advanced Technology Research Institute Co ltd
Priority to CN202310538891.5A priority Critical patent/CN116260928B/en
Publication of CN116260928A publication Critical patent/CN116260928A/en
Application granted granted Critical
Publication of CN116260928B publication Critical patent/CN116260928B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/01Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
    • H04N7/0135Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level involving interpolation processes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/62Analysis of geometric attributes of area, perimeter, diameter or volume
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N17/00Diagnosis, testing or measuring for television systems or their details
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/01Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
    • H04N7/0135Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level involving interpolation processes
    • H04N7/0137Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level involving interpolation processes dependent on presence/absence of motion, e.g. of motion zones
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/01Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
    • H04N7/0135Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level involving interpolation processes
    • H04N7/0142Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level involving interpolation processes the interpolation being edge adaptive
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Geometry (AREA)
  • Television Systems (AREA)

Abstract

The invention relates to the technical field of image frame processing, in particular to a visual optimization method based on intelligent frame interpolation.

Description

Visual optimization method based on intelligent frame insertion
Technical Field
The invention relates to the technical field of image frame processing, in particular to a visual optimization method based on intelligent frame insertion.
Background
Frame insertion adds a frame between every two frames of the original video, shortening the display time between frames and raising the video frame rate. This corrects the illusion formed by the persistence of vision of the human eye, effectively improves picture stability, makes the picture smoother, and renders details more clearly.
The patent application with publication number CN114205648A discloses a frame insertion method and device. The method comprises: determining a frame insertion position between a first image frame and a second image frame; determining, from the two frames, deformation information and occlusion information between them; determining an intermediate image frame between the two frames according to the frame insertion position, the deformation information, and the two frames; and determining the inserted image frame corresponding to the insertion position according to the occlusion information and the intermediate image frame, and inserting it at that position. By processing each insertion position according to the deformation and occlusion information between image frames, the inserted frames corresponding to each position between the first and second image frames are determined, which effectively ensures the frame insertion effect.
However, the prior art has the following problem:
the prior art does not consider the different motion states of objects in the video when choosing the number of frames to insert, and does not adjust that number according to the moving speed of objects across adjacent video frames or to changes in the brightness state of adjacent frames, so neither the frame insertion efficiency nor the visual effect of the video after frame insertion can be improved.
Disclosure of Invention
In order to solve the above problems, the present invention provides a visual optimization method based on intelligent frame insertion, which includes:
step S1, acquiring a video to be optimized and dividing it into a plurality of video segments, each containing at least three video frames;
step S2, determining image parameters of adjacent video frames in each video segment, calculating the corresponding image parameter characterization values, and determining the smooth state of the video segment based on the differences of the characterization values of adjacent frames, the smooth state being either a first smooth state or a second smooth state;
step S3, determining the optimization mode for frame insertion based on the smooth state of the video segment, wherein a first frame insertion optimization mode is adopted when the segment is in the first smooth state, and a second frame insertion optimization mode is adopted when it is in the second smooth state;
in the first frame insertion optimization mode, the feature contours in each video frame of the segment are determined, and the number of inserted frames for each pair of adjacent frames is adjusted based on the displacement differences of the feature contours between those frames;
in the second frame insertion optimization mode, the image brightness characterization value of each video frame of the segment is determined, and the number of inserted frames for each pair of adjacent frames is adjusted based on the difference of the brightness characterization values between those frames.
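The three steps above can be sketched as a simple dispatch loop. All names here (`split_into_segments`, `optimize_video`, the mode callables) are hypothetical, and segment classification and the two interpolation modes are passed in as stand-ins rather than the patent's actual procedures:

```python
def split_into_segments(frames, seg_len=3):
    """S1 sketch: chop the video into segments of seg_len frames
    (at least three per the text); a short tail is dropped for brevity."""
    return [frames[i:i + seg_len]
            for i in range(0, len(frames) - seg_len + 1, seg_len)]

def optimize_video(frames, classify_segment, interpolate_first, interpolate_second):
    """S2/S3 sketch: classify each segment's smooth state and dispatch to
    the matching frame-insertion mode; each mode callable returns the
    segment's frames with any inserted frames already included."""
    out = []
    for seg in split_into_segments(frames):
        mode = classify_segment(seg)  # "first" or "second" smooth state
        fn = interpolate_first if mode == "first" else interpolate_second
        out.extend(fn(seg))
    return out
```

A real implementation would plug the contour-displacement mode and the brightness-difference mode described below into the two callables.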
Further, in the step S2, image parameters of adjacent video frames in each video segment are determined, where the image parameters include a contrast value R, a brightness value B of the video frame, and an area S of an object contour in the video frame.
Further, in the step S2, an image parameter characterization value K of the video frame is calculated according to the formula (1),
K = R/R0 + B/B0 + S/S0 (1)
in the formula (1), R0 represents a preset contrast ratio parameter, B0 represents a preset brightness ratio parameter, and S0 represents a preset contour area ratio parameter.
Further, in the step S2, a smooth state of the video segment is determined based on the difference value of the image parameter characterization values corresponding to the adjacent video frames, wherein,
calculating the average value of the differences of the image parameter characterization values corresponding to the adjacent video frames, comparing the average value with a preset image difference comparison threshold value, and determining the smooth state of the video segment according to the comparison result,
if the comparison result meets the first condition, determining the smooth state of the video segment as a first smooth state;
if the comparison result meets the second condition, determining the smooth state of the video segment as a second smooth state;
the first condition is that the average value is larger than or equal to the image difference value comparison threshold value, and the second condition is that the average value is smaller than the image difference value comparison threshold value.
Further, in the step S3, the feature contours in each video frame of the video segment are determined, wherein
a displacement difference is determined based on the contour-centre coordinates of the same object contour in adjacent video frames and compared with a preset first displacement difference comparison threshold, and whether the object contour in the video frame is a feature contour is determined according to the comparison result, wherein
under the first displacement comparison condition, the object contour in the video frame is determined to be a feature contour;
the first displacement comparison condition being that the displacement difference is greater than or equal to the first displacement difference comparison threshold.
Further, in the step S3, the displacement difference is determined based on the contour-centre coordinates of the same object contour in adjacent video frames, wherein a rectangular coordinate system is established with the centre point of each video frame as the origin, the coordinates of the contour centre point of the same object contour in each of the adjacent frames are determined, and the displacement difference D of the object contour between the adjacent frames is calculated according to formula (2),
D = √((X2 − X1)² + (Y2 − Y1)²) (2)
in the formula (2), (X1, Y1) are the coordinates of the contour centre point of the object contour in the earlier of the adjacent video frames, and (X2, Y2) are the coordinates of the contour centre point in the later frame.
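Formula (2) is the Euclidean distance between the two contour-centre points, which can be computed directly (the function name is illustrative):

```python
import math

def displacement(center_prev, center_next):
    """Formula (2): Euclidean distance between the contour centre
    (X1, Y1) in the earlier frame and (X2, Y2) in the later frame."""
    (x1, y1), (x2, y2) = center_prev, center_next
    return math.hypot(x2 - x1, y2 - y1)
```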
Further, in the step S3, the number of inserted frames is adjusted based on the displacement differences of the feature contours between adjacent video frames in the video segment, wherein
the displacement difference of each object contour determined to be a feature contour in the adjacent video frames is compared with a preset second displacement difference comparison threshold and a preset third displacement difference comparison threshold, the first displacement difference comparison threshold being smaller than the second and the second smaller than the third, and the number of inserted frames for the adjacent video frames is adjusted according to the comparison result,
under the second displacement comparison condition, the number of inserted frames is increased for the adjacent video frames;
under the third displacement comparison condition, the number of inserted frames is reduced for the adjacent video frames;
under the fourth displacement comparison condition, the number of inserted frames does not need to be adjusted;
wherein the second displacement comparison condition is that the displacement difference is greater than or equal to the third displacement difference comparison threshold, the third displacement comparison condition is that the displacement difference is greater than or equal to the first threshold and less than or equal to the second, and the fourth displacement comparison condition is that the displacement difference is greater than the second threshold and less than the third.
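The three-way threshold rule above can be sketched as follows; the threshold names `d1 < d2 < d3`, the base frame count, and the step size are illustrative, since the patent does not fix concrete values:

```python
def adjust_by_displacement(d, base_frames, d1, d2, d3, step=1):
    """Map a feature contour's displacement d to an inserted-frame count,
    per the three comparison conditions (thresholds satisfy d1 < d2 < d3)."""
    if d >= d3:                 # second condition: fast motion, add frames
        return base_frames + step
    if d1 <= d <= d2:           # third condition: slow motion, remove frames
        return max(0, base_frames - step)
    return base_frames          # fourth condition: d2 < d < d3, keep as is
```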
Further, in the step S3, an image brightness characterization value of each video frame of the video segment is determined, wherein,
the image brightness characterization value E of the video frame is calculated according to formula (3),
E = R/R0 + B/B0 (3)
in the formula (3), R represents a contrast value of the video frame, B represents a brightness value of the video frame, R0 represents a preset contrast parameter, and B0 represents a preset brightness parameter.
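Assuming the additive ratio form E = R/R0 + B/B0 (the patent's equation image is not reproduced in this text, so the exact form is an assumption), formula (3) reduces to a one-liner:

```python
def image_brightness_value(r, b, r0, b0):
    """Formula (3) sketch under the assumed additive ratio form
    E = R/R0 + B/B0; r, b are the frame's contrast and brightness
    values, r0, b0 the preset parameters."""
    return r / r0 + b / b0
```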
Further, in the step S3, the number of inserted frames is adjusted based on the difference of the image brightness characterization values between adjacent video frames in the video segment, wherein
the difference of the image brightness characterization values of the two frames of each adjacent pair is determined and compared with a preset first image brightness difference comparison threshold and a preset second image brightness difference comparison threshold,
under the first brightness characterization value comparison result, the number of inserted frames is increased for the adjacent video frames;
under the second brightness characterization value comparison result, the number of inserted frames is reduced for the adjacent video frames;
under the third brightness characterization value comparison result, the number of inserted frames does not need to be adjusted;
wherein the first comparison result is that the difference is greater than or equal to the second image brightness difference comparison threshold, the second comparison result is that the difference is less than or equal to the first threshold, the third comparison result is that the difference is greater than the first threshold and less than the second, and the first image brightness difference comparison threshold is smaller than the second.
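The brightness-difference rule mirrors the displacement rule; threshold names `e1 < e2` and the adjustment step are illustrative:

```python
def adjust_by_brightness_diff(delta_e, base_frames, e1, e2, step=1):
    """Map the brightness-characterization difference of an adjacent
    frame pair to an inserted-frame count (thresholds satisfy e1 < e2)."""
    if delta_e >= e2:           # first result: large jump, add frames
        return base_frames + step
    if delta_e <= e1:           # second result: nearly identical, remove frames
        return max(0, base_frames - step)
    return base_frames          # third result: e1 < delta_e < e2, keep as is
```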
Further, in the step S3, the inserting the adjacent video frames includes inserting an insertion frame between the adjacent video frames, where the insertion frame is generated based on an insertion frame model.
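The patent leaves the insertion-frame model unspecified (the description later notes that the concrete interpolation method is not limited), so the following uses a naive per-pixel linear blend purely as a stand-in; production interpolators are typically motion-compensated:

```python
def blend_frame(prev_frame, next_frame, t):
    """Per-pixel linear blend between two frames, a naive stand-in for
    the patent's unspecified insertion-frame model; frames are 2D lists
    of intensity values, t is the blend position in (0, 1)."""
    return [[(1 - t) * a + t * b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(prev_frame, next_frame)]

def insert_between(prev_frame, next_frame, n):
    """Generate n evenly spaced intermediate frames between the pair."""
    return [blend_frame(prev_frame, next_frame, (i + 1) / (n + 1))
            for i in range(n)]
```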
Compared with the prior art, the invention divides the video to be optimized into a plurality of video segments and determines the smooth state of each segment based on the differences of the image parameter characterization values of adjacent video frames. When a segment is in the first smooth state, the feature contours in its video frames are determined and the number of inserted frames is adjusted based on the displacement differences of those contours between adjacent frames; when a segment is in the second smooth state, the image brightness characterization value of each frame is determined and the number of inserted frames is adjusted based on the differences of those values between adjacent frames. This ensures the efficiency of frame insertion for adjacent video frames, effectively improves the smoothness of the video segment, and guarantees the visual effect of the segment after frame insertion.
In particular, in the invention, the smooth state of a video segment is determined based on the average of the differences of the image parameter characterization values of adjacent video frames in the segment. The characterization value is calculated from the contrast value and brightness value of a video frame and the area of the object contour in the frame, and it characterizes the differences between the frames of the segment: the larger the difference in characterization values, the larger the difference between video frames. Determining the smooth state from this difference requires little computation and provides a basis for the subsequent data processing.
In particular, in the invention, when a video segment is in the first smooth state, those object contours whose displacement difference between adjacent video frames exceeds the preset first displacement difference comparison threshold are determined to be feature contours. In practice, when a segment is in the first smooth state, that is, when its smoothness is low, the differences between its video frames are large, and the main factor affecting the visual effect is object motion. The invention therefore identifies the contours of objects that are in motion and moving faster than a certain value, and determines the corresponding displacement difference, represented by the distance between the contour-centre coordinates of the same object contour in adjacent frames. Since the same object contour in adjacent frames belongs to the same object, the displacement difference represents the distance the object has moved; the larger that distance, the greater the influence on the visual effect. Adjusting the number of inserted frames according to the displacement differences of the feature contours in adjacent frames thus scientifically ensures the frame insertion effect while improving frame insertion efficiency.
In particular, in the invention, when a video segment is in the second smooth state, the number of inserted frames is adjusted based on the differences of the image brightness characterization values of adjacent video frames; this value is calculated from the contrast value and brightness value of a frame. In practice, when a segment is in the second smooth state, that is, when its smoothness is already high and objects in motion move slowly across adjacent frames, the main factors affecting smoothness are the differences in brightness and contrast between adjacent frames. Adaptively adjusting the number of inserted frames according to these differences therefore ensures the efficiency of frame insertion for adjacent video frames and improves the effect of the video segment after frame insertion.
Drawings
FIG. 1 is a schematic diagram of steps of a visual optimization method based on intelligent frame interpolation according to an embodiment of the invention;
fig. 2 is a schematic structural diagram of the displacement difference of feature contours in adjacent video frames according to an embodiment of the present invention.
Detailed Description
In order that the objects and advantages of the invention will become more apparent, the invention will be further described with reference to the following examples; it should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Preferred embodiments of the present invention are described below with reference to the accompanying drawings. It should be understood by those skilled in the art that these embodiments are merely for explaining the technical principles of the present invention, and are not intended to limit the scope of the present invention.
It should be noted that, in the description of the present invention, terms such as "upper," "lower," "left," "right," "inner," "outer," and the like indicate directions or positional relationships based on the directions or positional relationships shown in the drawings, which are merely for convenience of description, and do not indicate or imply that the apparatus or elements must have a specific orientation, be constructed and operated in a specific orientation, and thus should not be construed as limiting the present invention.
Furthermore, it should be noted that, in the description of the present invention, unless explicitly specified and limited otherwise, the terms "mounted" and "connected" are to be construed broadly; a connection may be fixed, detachable, or integral; mechanical or electrical; direct, indirect through an intermediate medium, or a communication between two elements. The specific meaning of the above terms in the present invention can be understood by those skilled in the art according to the specific circumstances.
Referring to fig. 1, which is a schematic diagram illustrating steps of a visual optimization method based on intelligent frame insertion according to an embodiment of the present invention, the visual optimization method based on intelligent frame insertion of the present invention includes:
step S1, acquiring a video to be optimized and dividing it into a plurality of video segments, each containing at least three video frames;
step S2, determining image parameters of adjacent video frames in each video segment, calculating the corresponding image parameter characterization values, and determining the smooth state of the video segment based on the differences of the characterization values of adjacent frames, the smooth state being either a first smooth state or a second smooth state;
step S3, determining the optimization mode for frame insertion based on the smooth state of the video segment, wherein a first frame insertion optimization mode is adopted when the segment is in the first smooth state, and a second frame insertion optimization mode is adopted when it is in the second smooth state;
in the first frame insertion optimization mode, the feature contours in each video frame of the segment are determined, and the number of inserted frames for each pair of adjacent frames is adjusted based on the displacement differences of the feature contours between those frames;
in the second frame insertion optimization mode, the image brightness characterization value of each video frame of the segment is determined, and the number of inserted frames for each pair of adjacent frames is adjusted based on the difference of the brightness characterization values between those frames.
Specifically, the video to be optimized in the invention is a VFR (variable frame rate) video. Those skilled in the art will understand that the time interval between frames in a VFR video may vary; such video sources often come from movies and games. For VFR video, a variable number of frames is typically inserted: different numbers of frames can be inserted between adjacent frames as required, balancing smoothness against frame-rate change to suit the video content and the specific application scenario.
Specifically, the invention does not limit the specific manner of acquiring the image parameters of a video frame. For example, a pre-trained model capable of identifying the brightness value, contrast value, and object-contour area of a video frame may be loaded into a logic component, so that the logic component performs the acquisition; the logic component may be a field-programmable device, a computer, or a microprocessor within the computer, which is prior art and is not repeated here.
Specifically, the invention does not limit the specific interpolation method used to insert frames between adjacent video frames; video frame interpolation is widely applied in the field of image frame processing and is not described again here.
Specifically, in the step S2, image parameters of adjacent video frames in each video segment are determined, where the image parameters include a contrast value R, a brightness value B of the video frame, and an area S of an object contour in the video frame.
Specifically, in the step S2, an image parameter characterization value K of the video frame is calculated according to a formula (1),
K = R/R0 + B/B0 + S/S0 (1)
in the formula (1), R0 represents a preset contrast ratio parameter, B0 represents a preset brightness ratio parameter, and S0 represents a preset contour area ratio parameter.
Specifically, R0 is an average value of contrast values of all video frames in the video to be optimized, B0 is an average value of brightness values of all video frames in the video to be optimized, and S0 is an average value of areas of object outlines in all video frames in the video to be optimized.
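With R0, B0, and S0 defined as video-wide averages, and assuming the additive ratio form K = R/R0 + B/B0 + S/S0 for formula (1) (the equation image is not reproduced, so this form is an assumption), the characterization value can be sketched as:

```python
def preset_parameters(frame_params):
    """R0, B0, S0 as video-wide averages of (R, B, S), per the text.
    frame_params: list of (contrast, brightness, contour_area) tuples."""
    n = len(frame_params)
    r0 = sum(p[0] for p in frame_params) / n
    b0 = sum(p[1] for p in frame_params) / n
    s0 = sum(p[2] for p in frame_params) / n
    return r0, b0, s0

def characterization_value(r, b, s, presets):
    """Formula (1) sketch under the assumed additive ratio form
    K = R/R0 + B/B0 + S/S0."""
    r0, b0, s0 = presets
    return r / r0 + b / b0 + s / s0
```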
Specifically, in the step S2, the smooth state of the video segment is determined based on the difference value of the image parameter characterization values corresponding to the adjacent video frames, wherein,
calculating the average value ΔKp of the differences ΔK of the image parameter characterization values of adjacent video frames, where ΔK = K2 − K1, K2 being the image parameter characterization value of the later frame of an adjacent pair and K1 that of the earlier frame, with
ΔKp = (ΔK1 + ΔK2 + … + ΔKn) / n
where ΔKi represents the difference of the image parameter characterization values for the i-th pair of adjacent video frames, n represents the number of adjacent video-frame pairs in the video segment, and i is an integer greater than 0; the average ΔKp is compared with a preset image difference comparison threshold ΔKp0, where ΔKp0 > 0, and the smooth state of the video segment is determined according to the comparison result,
if the comparison result meets the first condition, determining the smooth state of the video segment as a first smooth state;
if the comparison result meets the second condition, determining the smooth state of the video segment as a second smooth state;
wherein the first condition is ΔKp ≥ ΔKp0, and the second condition is ΔKp < ΔKp0.
Specifically, the image difference comparison threshold ΔKp0 is calculated based on the average of the differences of the image parameter characterization values of adjacent video frames in the video to be optimized.
Specifically, in the invention, the smooth state of a video segment is determined from the average difference of the image parameter characterization values of its adjacent frames. The characterization value is computed from a frame's contrast value, brightness value and object contour area, so it captures the difference between the frames of the segment: the larger the difference in characterization values, the larger the difference between the video frames. Determining the smooth state from this difference requires little computation and provides the basis for the subsequent data processing.
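A minimal sketch of the step-S2 decision follows. The patent defines ΔK = K2 − K1 but compares the average against a positive threshold; this sketch assumes absolute differences are averaged so that ΔKp is meaningfully comparable to ΔKp0 > 0, and the state labels are illustrative:

```python
def smooth_state(k_values, delta_kp0):
    """Classify a video segment per step S2.

    k_values  : characterization values K of the segment's frames, in order
    delta_kp0 : preset image difference comparison threshold (> 0)

    Assumption: |ΔK| is averaged, since the patent compares the
    average against a strictly positive threshold.
    """
    diffs = [abs(k2 - k1) for k1, k2 in zip(k_values, k_values[1:])]
    delta_kp = sum(diffs) / len(diffs)
    # First condition: ΔKp >= ΔKp0 -> first (less smooth) state.
    return "first" if delta_kp >= delta_kp0 else "second"
```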
Specifically, in the step S3, a feature contour in each video frame of the video segment is determined, wherein,
determining a displacement difference amount D based on the contour centre coordinates of the same object contour in adjacent video frames, comparing the displacement difference amount with a preset first displacement difference comparison threshold D1, wherein D1 > 0, and determining whether the object contour in the video frames is a feature contour according to the comparison result,
under the first displacement comparison condition, determining the object contour in the video frame to be a feature contour;
wherein the first displacement comparison condition is D ≥ D1.
Specifically, in the invention, when a video segment is in the first smooth state, an object contour whose displacement difference amount between adjacent video frames exceeds the preset first displacement difference comparison threshold is determined to be a feature contour. In practice, when a segment is in the first, less smooth state, the differences between its frames are large, and the main factor affecting the visual effect is object motion. The invention therefore identifies, in each frame, the contours of objects that are in motion and moving faster than a certain speed. The displacement difference amount is the distance between the contour centre coordinates of the same object contour in adjacent frames; since the same contour in adjacent frames belongs to the same object, this distance measures how far the object has moved, and the farther it moves, the faster it is and the greater its influence on the visual effect. Adjusting the number of inserted frames according to the displacement difference amounts of the feature contours therefore tunes the interpolation scientifically and improves the smoothness of the segment after interpolation.
Specifically, referring to fig. 2, in the step S3 the displacement difference amount is determined based on the contour centre coordinates of the same object contour in adjacent video frames, wherein a rectangular coordinate system is established with the centre point of each video frame as the origin, the coordinates of the contour centre point of the same object contour in the adjacent frames are determined, and the displacement difference amount D of the object contour is calculated according to formula (2),

D = √((X2 − X1)² + (Y2 − Y1)²) (2)

in formula (2), (X1, Y1) represent the coordinates of the contour centre point of the object contour in the earlier of the adjacent video frames, and (X2, Y2) those in the later frame.
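Formula (2) and the feature-contour test reduce to a few lines; the function names here are illustrative, and contour centres are assumed to be given as (x, y) tuples in the frame-centred coordinate system described above:

```python
import math

def displacement_difference(centre_prev, centre_next):
    """Formula (2): Euclidean distance between the contour centre
    (X1, Y1) in the earlier frame and (X2, Y2) in the later frame."""
    (x1, y1), (x2, y2) = centre_prev, centre_next
    return math.hypot(x2 - x1, y2 - y1)

def is_feature_contour(d, d1):
    """First displacement comparison condition of step S3: D >= D1."""
    return d >= d1
```

In a full pipeline the centres would come from contour detection (e.g. image moments of a segmented object), which the patent does not prescribe.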
Specifically, in the step S3, the number of inserted frames is adjusted based on the displacement difference amounts of the feature contours in adjacent video frames of the video segment, wherein,
the displacement difference amount D corresponding to an object contour determined to be a feature contour in adjacent video frames is compared with a preset second displacement difference comparison threshold D2 and a third displacement difference comparison threshold D3, wherein 0 < D1 < D2 < D3,
under the second displacement comparison condition, the number of inserted frames is increased when interpolating the adjacent video frames;
under the third displacement comparison condition, the number of inserted frames is reduced when interpolating the adjacent video frames;
under the fourth displacement comparison condition, the number of inserted frames does not need to be adjusted;
wherein the second displacement comparison condition is D ≥ D3, the third displacement comparison condition is D1 ≤ D ≤ D2, and the fourth displacement comparison condition is D2 < D < D3.
Specifically, the first displacement difference comparison threshold D1, the second displacement difference comparison threshold D2 and the third displacement difference comparison threshold D3 are calculated based on the average displacement difference amount of moving objects in adjacent video frames of the video to be optimized; in this embodiment, D1 = 0.8·D0, D2 = 1.2·D0 and D3 = 1.4·D0, where D0 represents the average displacement difference amount of moving objects in adjacent video frames of the video to be optimized.
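A sketch of the first-smooth-state adjustment, using the embodiment's thresholds D1 = 0.8·D0, D2 = 1.2·D0 and D3 = 1.4·D0. The size of the increase or decrease is an assumption borrowed from the later passage N0 = N × α, with α taken as 0.4 (inside the stated [0.33, 0.5] range); the patent does not fix the step size here:

```python
def adjust_frames_by_displacement(d, d0, n_init, alpha=0.4):
    """Adjust the inserted-frame count for one adjacent pair (step S3,
    first smooth state).

    d      : displacement difference amount D of the feature contour
    d0     : video-wide average displacement difference amount D0
    n_init : initial inserted-frame number N (embodiment: within [3, 8])
    alpha  : assumed adjustment coefficient, per N0 = N * alpha
    """
    d2, d3 = 1.2 * d0, 1.4 * d0
    step = round(n_init * alpha)
    if d >= d3:                  # second condition: fast motion, add frames
        return n_init + step
    if d <= d2:                  # third condition (D1 <= D <= D2): fewer frames
        return max(1, n_init - step)
    return n_init                # fourth condition (D2 < D < D3): unchanged
```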
Specifically, in the invention, when a video segment is in the first smooth state, the number of frames to be inserted is adjusted based on the displacement difference amounts of the feature contours in its adjacent frames. In practice, a segment in the first, less smooth state contains objects moving quickly, and the faster an object moves, the more frames must be inserted between adjacent frames to keep the segment smooth. Adjusting the number of inserted frames according to the displacement difference amounts of the feature contours therefore preserves interpolation efficiency while improving the smoothness of the segment after interpolation.
Specifically, in the step S3, an image brightness characterization value of each video frame of the video segment is determined, wherein,
the image brightness characterization value E of the video frame is calculated according to formula (3),

E = R/R0 + B/B0 (3)

in formula (3), R represents the contrast value of the video frame, B represents the brightness value of the video frame, R0 represents a preset contrast parameter, and B0 represents a preset brightness parameter.
Specifically, in the step S3, the number of inserted frames is adjusted based on the difference amounts of the image brightness characterization values of adjacent video frames in the video segment, wherein,
the difference amount of the image brightness characterization values of the video frames in each adjacent pair is determined and compared with a preset first image brightness difference comparison threshold ΔE1 and a second image brightness difference comparison threshold ΔE2,
under the first brightness characterization value comparison result, the number of inserted frames is increased when interpolating the adjacent video frames;
under the second brightness characterization value comparison result, the number of inserted frames is reduced when interpolating the adjacent video frames;
under the third brightness characterization value comparison result, the number of inserted frames does not need to be adjusted;
wherein the first comparison result is that the difference amount is ≥ ΔE2, the second comparison result is that the difference amount is ≤ ΔE1, and the third comparison result is that ΔE1 < difference amount < ΔE2, with ΔE1 < ΔE2.
Specifically, the first image brightness difference comparison threshold ΔE1 and the second image brightness difference comparison threshold ΔE2 are calculated based on the average difference amount of the image brightness characterization values of adjacent video frames of the video to be optimized; ΔE1 = 0.8·ΔE0 and ΔE2 = 1.2·ΔE0, where ΔE0 represents that average difference amount.
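The second-smooth-state adjustment mirrors the displacement-based one. A sketch with the embodiment's thresholds ΔE1 = 0.8·ΔE0 and ΔE2 = 1.2·ΔE0; the step size N0 = N × α with α = 0.4 is an assumption carried over from the passage on the adjustment coefficient:

```python
def adjust_frames_by_luminance(delta_e, delta_e0, n_init, alpha=0.4):
    """Adjust the inserted-frame count for one adjacent pair (step S3,
    second smooth state).

    delta_e  : difference amount of the brightness characterization values
    delta_e0 : video-wide average difference amount ΔE0
    n_init   : initial inserted-frame number N
    alpha    : assumed adjustment coefficient, per N0 = N * alpha
    """
    e1, e2 = 0.8 * delta_e0, 1.2 * delta_e0
    step = round(n_init * alpha)
    if delta_e >= e2:            # first comparison result: increase
        return n_init + step
    if delta_e <= e1:            # second comparison result: decrease
        return max(1, n_init - step)
    return n_init                # third comparison result: unchanged
```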
The amounts by which the inserted-frame count is reduced or increased are not particularly limited in the present invention and may be set based on a predetermined initial frame number, with N0 = N × α, where N0 represents the number of inserted frames for achieving the interpolation effect, N represents the initial frame number, and α represents the adjustment coefficient. To avoid too large an adjustment while still producing a perceptible effect, a person skilled in the art may select the value of the adjustment coefficient within [0.33, 0.5].
In the present embodiment, the initial number of inserted frames is set within the interval [3, 8], and adjacent video frames are interpolated with this initial number when no adjustment is required.
Specifically, in the invention, when a video segment is in the second smooth state, the number of inserted frames is adjusted based on the differences of the image brightness characterization values of adjacent frames, the characterization value being computed from a frame's contrast value and brightness value. In practice, when a segment is in the second, smoother state, the objects in its frames move slowly, and the main factors affecting smoothness are the differences in brightness and contrast between adjacent frames. Adapting the number of inserted frames to these differences therefore preserves interpolation efficiency and improves the visual effect of the segment after interpolation.
Specifically, the step S3 includes inserting an insertion frame between adjacent video frames when inserting frames of the adjacent video frames, where the insertion frame is generated based on an insertion frame model.
Specifically, the invention does not limit the specific form of the frame interpolation model; deep-learning-based models from the prior art, such as the SRCNN, ESPCN and EDVR models, can be used. These models learn the spatio-temporal information of the video and thereby generate high-quality interpolated frames.
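The patent leaves the interpolation model open. As a stand-in for a learned model, the sketch below inserts n linearly blended frames between two frames; it shows where a model's output would slot into the pipeline, not the quality a learned model such as EDVR would deliver:

```python
import numpy as np

def insert_frames(frame_a, frame_b, n):
    """Generate n intermediate frames between frame_a and frame_b by
    linear blending (a minimal placeholder for the interpolation model).

    frame_a, frame_b : numpy arrays of the same shape and dtype
    n                : number of frames to insert between them
    """
    out = []
    for i in range(1, n + 1):
        t = i / (n + 1)  # blend weight advances evenly toward frame_b
        out.append(((1 - t) * frame_a + t * frame_b).astype(frame_a.dtype))
    return out
```

A real deployment would replace the blend with a motion-compensated or learned predictor, keeping the same call shape: two frames in, n frames out.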
Thus far, the technical solution of the present invention has been described in connection with the preferred embodiments shown in the drawings, but it is easily understood by those skilled in the art that the scope of protection of the present invention is not limited to these specific embodiments. Equivalent modifications and substitutions for related technical features may be made by those skilled in the art without departing from the principles of the present invention, and such modifications and substitutions will be within the scope of the present invention.

Claims (10)

1. The visual optimization method based on intelligent frame insertion is characterized by comprising the following steps of:
step S1, acquiring a video to be optimized, dividing the video to be optimized into a plurality of video segments, wherein each video segment at least comprises three video frames;
step S2, determining image parameters of adjacent video frames in each video segment, correspondingly calculating image parameter characterization values, and determining a smooth state of the video segment based on the difference value of the image parameter characterization values corresponding to the adjacent video frames, wherein the smooth state comprises a first smooth state and a second smooth state;
step S3, determining an optimization mode when the video segment is subjected to frame inserting optimization based on the smooth state of the video segment, wherein the first frame inserting optimization mode is adopted when the video segment is in a first smooth state, and the second frame inserting optimization mode is adopted when the video segment is in a second smooth state;
the first frame inserting optimization mode is that feature contours in all video frames of the video segment are determined, and the frame inserting quantity is adjusted to insert frames for adjacent video frames based on displacement difference amounts of the feature contours in all adjacent video frames in the video segment;
and the second frame inserting optimization mode is to determine the image brightness representation value of each video frame of the video segment, adjust the frame inserting quantity based on the difference quantity of the image brightness representation value in each adjacent video frame in the video segment, and insert frames to the adjacent video frames.
2. The intelligent frame inserting-based visual optimization method according to claim 1, wherein in the step S2, image parameters of adjacent video frames in each video segment are determined, wherein the image parameters include a contrast value R, a brightness value B of a video frame, and an area S of an object contour in the video frame.
3. The intelligent frame inserting-based visual optimization method according to claim 2, wherein in the step S2, the image parameter characterization value K of the video frame is calculated according to the formula (1),
K = R/R0 + B/B0 + S/S0 (1)
in the formula (1), R0 represents a preset contrast ratio parameter, B0 represents a preset brightness ratio parameter, and S0 represents a preset contour area ratio parameter.
4. The intelligent frame inserting-based visual optimization method according to claim 3, wherein in the step S2, the smooth state of the video segment is determined based on the difference value of the image parameter characterization values corresponding to the adjacent video frames, wherein,
calculating the average value of the differences of the image parameter characterization values corresponding to the adjacent video frames, comparing the average value with a preset image difference comparison threshold value, and determining the smooth state of the video segment according to the comparison result,
if the comparison result meets the first condition, determining the smooth state of the video segment as a first smooth state;
if the comparison result meets the second condition, determining the smooth state of the video segment as a second smooth state;
the first condition is that the average value is larger than or equal to the image difference value comparison threshold value, and the second condition is that the average value is smaller than the image difference value comparison threshold value.
5. The intelligent frame inserting based visual optimization method according to claim 4, wherein in said step S3, feature contours in each video frame of said video segment are determined, wherein,
determining displacement difference amount based on the contour center coordinates of the same object contour in adjacent video frames, comparing the displacement difference amount with a preset first displacement difference comparison threshold value, and determining whether the object contour in the video frame is a characteristic contour according to a comparison result, wherein,
under the first displacement contrast condition, determining the object contour in the video frame as a characteristic contour;
the first displacement comparison condition is that the displacement difference amount is larger than or equal to the first displacement difference comparison threshold value.
6. The intelligent frame inserting-based visual optimization method according to claim 5, wherein in the step S3, the displacement difference is determined based on the coordinates of the contour center of the same object contour in the adjacent video frames, wherein a rectangular coordinate system is established by taking the center point of each video frame as the origin, the coordinates of the contour center point corresponding to the same object contour in the adjacent video frames are determined, the displacement difference D of the object contour in the adjacent video frames is calculated according to the formula (2),
D = √((X2 − X1)² + (Y2 − Y1)²) (2)
in the formula (2), Y2 represents a Y-axis coordinate value of a contour center point of the object contour in a subsequent video frame among the adjacent video frames, Y1 represents a Y-axis coordinate value of a contour center point of the object contour in a previous video frame among the adjacent video frames, X2 represents an X-axis coordinate value of a contour center point of the object contour in a subsequent video frame among the adjacent video frames, and X1 represents an X-axis coordinate value of a contour center point of the object contour in a previous video frame among the adjacent video frames.
7. The intelligent frame inserting-based visual optimization method according to claim 6, wherein in the step S3, the number of frames to be inserted is adjusted based on the displacement difference of the feature contours in each adjacent video frame in the video segment, wherein,
comparing displacement difference amounts corresponding to the object contours determined as the characteristic contours in the adjacent video frames with a preset second displacement difference comparison threshold and a third displacement difference comparison threshold, wherein the first displacement difference comparison threshold is smaller than the second displacement difference comparison threshold and the second displacement difference comparison threshold is smaller than the third displacement difference comparison threshold, and adjusting the frame inserting number of the adjacent video frames according to comparison results,
under the second displacement comparison condition, increasing the number of the inserted frames to insert frames into the adjacent video frames;
under the third displacement comparison condition, reducing the number of the inserted frames to insert frames of the adjacent video frames;
under the fourth displacement comparison condition, the number of the inserted frames is not required to be adjusted;
the second displacement comparison condition is that the displacement difference amount is larger than or equal to the third displacement difference comparison threshold value, the third displacement comparison condition is that the displacement difference amount is larger than or equal to the first displacement difference comparison threshold value and the displacement difference amount is smaller than or equal to the second displacement difference comparison threshold value, and the fourth displacement comparison condition is that the displacement difference amount is larger than the second displacement difference comparison threshold value and the displacement difference amount is smaller than the third displacement difference comparison threshold value.
8. The intelligent frame inserting-based visual optimization method according to claim 1, wherein in the step S3, an image brightness characterization value of each video frame of the video segment is determined, wherein,
the image intensity characterization value E of the video frame is calculated according to equation (3),
E = R/R0 + B/B0 (3)
in the formula (3), R represents a contrast value of the video frame, B represents a brightness value of the video frame, R0 represents a preset contrast parameter, and B0 represents a preset brightness parameter.
9. The intelligent frame inserting-based visual optimization method according to claim 8, wherein in the step S3, the number of frames to be inserted is adjusted based on the difference of the image brightness characterization values in each adjacent video frame in the video segment, wherein,
determining the difference of the image brightness representation values of all video frames in adjacent video frames, comparing the difference with a preset first graph brightness difference comparison threshold value and a second graph brightness difference comparison threshold value,
under the comparison result of the first brightness characterization value, increasing the number of the inserted frames to insert frames of the adjacent video frames;
reducing the number of the inserted frames to insert frames into the adjacent video frames under the comparison result of the second brightness characterization value;
under the comparison result of the third brightness representation value, the number of the inserted frames is not required to be adjusted;
the first brightness representation value comparison result is that the difference amount is larger than or equal to the second graph brightness difference comparison threshold value, the second brightness representation value comparison result is that the difference amount is smaller than or equal to the first graph brightness difference comparison threshold value, the third brightness representation value comparison result is that the difference amount is larger than the first graph brightness difference comparison threshold value and the difference amount is smaller than the second graph brightness difference comparison threshold value, and the first graph brightness difference comparison threshold value is smaller than the second graph brightness difference comparison threshold value.
10. The intelligent frame inserting-based visual optimization method according to claim 1, wherein the step S3 includes inserting an insertion frame between adjacent video frames when inserting frames of the adjacent video frames, the insertion frame being generated based on an insertion frame model.
CN202310538891.5A 2023-05-15 2023-05-15 Visual optimization method based on intelligent frame insertion Active CN116260928B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310538891.5A CN116260928B (en) 2023-05-15 2023-05-15 Visual optimization method based on intelligent frame insertion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310538891.5A CN116260928B (en) 2023-05-15 2023-05-15 Visual optimization method based on intelligent frame insertion

Publications (2)

Publication Number Publication Date
CN116260928A true CN116260928A (en) 2023-06-13
CN116260928B CN116260928B (en) 2023-07-11

Family

ID=86682844

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310538891.5A Active CN116260928B (en) 2023-05-15 2023-05-15 Visual optimization method based on intelligent frame insertion

Country Status (1)

Country Link
CN (1) CN116260928B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116708789A (en) * 2023-08-04 2023-09-05 湖南马栏山视频先进技术研究院有限公司 Video analysis coding system based on artificial intelligence
CN116723355A (en) * 2023-08-11 2023-09-08 深圳传趣网络技术有限公司 Video frame inserting processing method, device, equipment and storage medium
CN116847126A (en) * 2023-07-20 2023-10-03 北京富通亚讯网络信息技术有限公司 Video decoding data transmission method and system
CN117132936A (en) * 2023-08-31 2023-11-28 北京中电拓方科技股份有限公司 Data carding and data access system of coal plate self-building system
CN117651148A (en) * 2023-11-01 2024-03-05 广东联通通信建设有限公司 Terminal management and control method for Internet of things

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040101058A1 (en) * 2002-11-22 2004-05-27 Hisao Sasai Device, method and program for generating interpolation frame
EP1863283A1 (en) * 2006-05-31 2007-12-05 Vestel Elektronik Sanayi ve Ticaret A.S. A method and apparatus for frame interpolation
JP2009206940A (en) * 2008-02-28 2009-09-10 Toshiba Corp Interpolation frame generation circuit and frame interpolation apparatus
CN102123235A (en) * 2011-03-24 2011-07-13 杭州海康威视软件有限公司 Method and device for generating video interpolation frame
EP2701386A1 (en) * 2012-08-21 2014-02-26 MediaTek, Inc Video processing apparatus and method
CN111641828A (en) * 2020-05-16 2020-09-08 Oppo广东移动通信有限公司 Video processing method and device, storage medium and electronic equipment
WO2021006146A1 (en) * 2019-07-10 2021-01-14 ソニー株式会社 Image processing device and image processing method
CN113766275A (en) * 2021-09-29 2021-12-07 北京达佳互联信息技术有限公司 Video editing method, device, terminal and storage medium
CN114205648A (en) * 2021-12-07 2022-03-18 网易(杭州)网络有限公司 Frame interpolation method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
曹原周汉;滕奇志;何小海;: "基于单双向结合运动估计的帧率提升算法", 计算机与数字工程, no. 04 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116847126A (en) * 2023-07-20 2023-10-03 北京富通亚讯网络信息技术有限公司 Video decoding data transmission method and system
CN116708789A (en) * 2023-08-04 2023-09-05 湖南马栏山视频先进技术研究院有限公司 Video analysis coding system based on artificial intelligence
CN116708789B (en) * 2023-08-04 2023-10-13 湖南马栏山视频先进技术研究院有限公司 Video analysis coding system based on artificial intelligence
CN116723355A (en) * 2023-08-11 2023-09-08 深圳传趣网络技术有限公司 Video frame inserting processing method, device, equipment and storage medium
CN116723355B (en) * 2023-08-11 2023-11-28 深圳传趣网络技术有限公司 Video frame inserting processing method, device, equipment and storage medium
CN117132936A (en) * 2023-08-31 2023-11-28 北京中电拓方科技股份有限公司 Data carding and data access system of coal plate self-building system
CN117651148A (en) * 2023-11-01 2024-03-05 广东联通通信建设有限公司 Terminal management and control method for Internet of things

Also Published As

Publication number Publication date
CN116260928B (en) 2023-07-11

Similar Documents

Publication Publication Date Title
CN116260928B (en) Visual optimization method based on intelligent frame insertion
JP7395577B2 (en) Motion smoothing of reprojected frames
JP5536115B2 (en) Rendering of 3D video images on stereoscopic display
US6940505B1 (en) Dynamic tessellation of a base mesh
CN110049351B (en) Method and device for deforming human face in video stream, electronic equipment and computer readable medium
US11238569B2 (en) Image processing method and apparatus, image device, and storage medium
WO2020140728A1 (en) Image processing method, image processing apparatus and display apparatus
US6144387A (en) Guard region and hither plane vertex modification for graphics rendering
US20070159486A1 (en) Techniques for creating facial animation using a face mesh
CN111292236B (en) Method and computing system for reducing aliasing artifacts in foveal gaze rendering
JP4810249B2 (en) Image display device and luminance range correction method
US10650507B2 (en) Image display method and apparatus in VR device, and VR device
CN105574817A (en) Image anti-aliasing method and apparatus
US8766974B2 (en) Display apparatus and method
JP4468631B2 (en) Texture generation method and apparatus for 3D face model
US20120114267A1 (en) Method of enhancing contrast using bezier curve
CN106716499B (en) Information processing apparatus, control method, and program
US11120614B2 (en) Image generation apparatus and image generation method
CN111833262A (en) Image noise reduction method and device and electronic equipment
US6590582B1 (en) Clipping processing method
CN112686978A (en) Expression resource loading method and device and electronic equipment
CN103139524A (en) Video optimization method and information processing equipment
CN116173496A (en) Image frame rendering method and related device
CN111816117A (en) Method for adjusting picture brightness of display panel and display device
CN113436126B (en) Image saturation enhancement method and system and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant