CN111277779B - Video processing method and related device

Info

Publication number: CN111277779B
Authority: CN (China)
Prior art keywords: video, image, frame, mode, determining
Legal status: Active (the legal status is an assumption and is not a legal conclusion)
Application number: CN202010149267.2A
Other languages: Chinese (zh)
Other versions: CN111277779A (en)
Inventors: 胡杰, 林文真
Current Assignee: Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee: Guangdong Oppo Mobile Telecommunications Corp Ltd
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202010149267.2A
Publication of CN111277779A
Priority to PCT/CN2021/074417 (WO2021175055A1)
Application granted
Publication of CN111277779B

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00: Details of television systems
    • H04N5/76: Television signal recording
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44016: Processing of video elementary streams involving splicing one content stream with another content stream, e.g. for substituting a video clip
    • H04N21/4402: Processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/440281: Reformatting operations by altering the temporal resolution, e.g. by frame skipping
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/667: Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

Embodiments of the present application disclose a video processing method and a related device, applied to a terminal. The method includes: acquiring a to-be-processed video of a camera application in a preview state; dividing the to-be-processed video into a first video of a video layer and a second video of a user interface (UI) layer; performing video frame interpolation on the first video to obtain a third video; synthesizing first data of the third video and second data of the second video into target video data; and storing the target video data. This improves the frame-interpolation effect for the camera application in the preview state and thus the video quality in the preview state.

Description

Video processing method and related device
Technical Field
The present application relates to the field of electronic technologies, and in particular, to a video processing method and a related apparatus.
Background
With the rapid development of high-end multimedia information systems, users' expectations for visual experience keep rising, and existing video often cannot meet them. At present, when the camera of a camera device or a terminal device is in a preview state, the video is prone to frame skipping and the preview frame rate is unstable, which causes blurred image quality and visible judder on the display screens of terminal devices such as mobile phones during playback.
Disclosure of Invention
Embodiments of the present application provide a video processing method and a related device, which improve the frame-interpolation effect of a camera application in a preview state and thereby the video quality in the preview state.
In a first aspect, an embodiment of the present application provides a video processing method, where the method includes:
acquiring a video to be processed of a camera application in a preview state;
dividing the video to be processed into a first video of a video layer and a second video of a User Interface (UI) layer;
performing video frame interpolation on the first video to obtain a third video;
synthesizing the first data of the third video and the second data of the second video into target video data;
and storing the target video data.
In a second aspect, an embodiment of the present application provides a video processing apparatus, where the apparatus includes: a communication unit and a processing unit, wherein,
the processing unit is configured to acquire a to-be-processed video of the camera application in a preview state; to divide the to-be-processed video into a first video of a video layer and a second video of a user interface (UI) layer; to perform video frame interpolation on the first video to obtain a third video; to synthesize the first data of the third video and the second data of the second video into target video data; and to store the target video data.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and the program includes instructions for executing steps in any method of the first aspect of the embodiment of the present application.
In a fourth aspect, the present application provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program for electronic data exchange, where the computer program makes a computer perform part or all of the steps described in any one of the methods of the first aspect of the present application.
In a fifth aspect, the present application provides a computer program product, wherein the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to perform some or all of the steps as described in any one of the methods of the first aspect of the embodiments of the present application. The computer program product may be a software installation package.
It can be seen that, in the video processing method described in the embodiments of the present application, a to-be-processed video of a camera application in a preview state is acquired; the to-be-processed video is divided into a first video of a video layer and a second video of a user interface (UI) layer; video frame interpolation is performed on the first video to obtain a third video; the first data of the third video and the second data of the second video are synthesized into target video data; and the target video data is stored. By dividing the to-be-processed preview video into a video-layer video and a user-interface-layer video and interpolating frames only into the video-layer video, the method improves frame-interpolation efficiency and solves the problem of an unstable video frame rate while the camera application is in the preview state.
Drawings
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic diagram of video interpolation in video processing according to an embodiment of the present application;
fig. 2a is a schematic flowchart of a video processing method according to an embodiment of the present application;
fig. 2b is a schematic diagram of a software block diagram of a video frame interpolation provided in an embodiment of the present application;
fig. 2c is a schematic diagram of a hardware block diagram of a video frame interpolation provided in an embodiment of the present application;
fig. 3 is a schematic flowchart of a video processing method according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of an electronic device provided in an embodiment of the present application;
fig. 5 is a block diagram of functional units of a video processing apparatus according to an embodiment of the present disclosure.
Detailed Description
In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," and the like in the description and claims of the present application and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The following describes embodiments of the present application in detail.
Referring to fig. 1, fig. 1 is a schematic diagram of video frame interpolation in video processing according to an embodiment of the present disclosure. N frame images are inserted between a group of adjacent frame images in the first video, where N is a positive integer; in the example of fig. 1, N equals 1.
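As a minimal sketch of this idea (not code from the patent), the following Python snippet inserts N synthesized frames between every pair of adjacent frames; simple linear blending stands in for the motion-compensated interpolation described later.

```python
import numpy as np

def interpolate_frames(frames, n=1):
    """Insert n synthesized frames between each pair of adjacent frames.

    Linear blending is used here as a stand-in for the motion-compensated
    interpolation (MEMC) the patent describes; n = 1 matches fig. 1.
    """
    out = []
    for prev, nxt in zip(frames, frames[1:]):
        out.append(prev)
        for k in range(1, n + 1):
            t = k / (n + 1)  # temporal position of the synthesized frame
            out.append(((1 - t) * prev + t * nxt).astype(prev.dtype))
    out.append(frames[-1])
    return out

# 60 fps -> 120 fps: one synthesized frame per original gap
video = [np.full((4, 4), v, dtype=np.float32) for v in (0, 10, 20)]
assert len(interpolate_frames(video, n=1)) == 5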
To solve the above problem, an embodiment of the present application provides a video processing method, as shown in fig. 2a. Specifically, the method may include, but is not limited to, the following steps:
s201, the terminal acquires a to-be-processed video of the camera application in a preview state.
The terminal may be an electronic device with communication capability. Such electronic devices include various handheld devices with wireless communication functions, vehicle-mounted devices, wearable devices, computing devices or other processing devices connected to a wireless modem, as well as various forms of user equipment (UE), mobile stations (MS), terminal devices, and the like. Examples include mobile phones, tablet computers, notebook computers, and smart wearable devices.
The video to be processed may be a video in a preview state of a camera application stored in the memory of the terminal, or a video in a preview state directly acquired by the terminal from the camera application.
The to-be-processed video may be a layer-separated video of the terminal camera application in the preview state, or a video in which another video application is overlaid on the camera application's preview.
The video to be processed may be a static video, a dynamic video, a relatively still video, or a relatively moving video. The number of the videos to be processed can be one or more.
S202, the terminal divides the video to be processed into a first video of a video layer and a second video of a User Interface (UI) layer.
Before the to-be-processed video is divided into a first video of a video layer and a second video of a user interface (UI) layer, the first video of the video layer, whose layer format is YCbCr_420, is identified in the to-be-processed video in the compositing service SurfaceFlinger, and the second video of the UI layer, whose layer format is RGBA8888, is likewise identified in SurfaceFlinger. The second video of the UI layer includes the on-screen display (OSD) elements drawn by UI-layer functions of the camera application, such as the focusing frame, the face recognition frame, specific-object recognition frames, the progress bar, and beautification effects applied to the image.
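A minimal sketch of this separation, assuming layers are represented as plain dictionaries (the real SurfaceFlinger is a C++ service, so the records and field names here are illustrative only):

```python
# The format strings mirror the ones the patent says SurfaceFlinger uses
# to tell the two kinds of layers apart.
VIDEO_FORMAT = "YCbCr_420"   # video layer (camera preview frames)
UI_FORMAT = "RGBA8888"       # UI layer (focus frame, face boxes, progress bar)

def split_layers(layers):
    """Split a to-be-processed video's layers into video-layer and UI-layer groups."""
    first_video = [l for l in layers if l["format"] == VIDEO_FORMAT]
    second_video = [l for l in layers if l["format"] == UI_FORMAT]
    return first_video, second_video

layers = [{"name": "preview", "format": "YCbCr_420"},
          {"name": "focus_box", "format": "RGBA8888"}]
first, second = split_layers(layers)
assert [l["name"] for l in first] == ["preview"]
```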
S203, the terminal carries out video frame insertion on the first video to obtain a third video.
In a possible implementation manner, the terminal determines at least one frame of image to be inserted according to a Motion Estimation and Motion Compensation (MEMC) algorithm, and performs video frame insertion on the first video according to the at least one frame of image to be inserted to obtain a third video.
It should be further explained that the motion estimation algorithm in the MEMC algorithm includes a match search algorithm, which may be at least one of the following: a method based on the optical flow equation, a block matching algorithm (BMA), a pixel recursive algorithm (PRA), or a Bayesian method. The BMA may use at least one of: a two-dimensional logarithmic search, a three-step search, a four-step search, a diamond search, or a hexagon search.
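For illustration, here is a sketch of the simplest block-matching variant, an exhaustive search that minimizes the sum of absolute differences (SAD); the stepped searches listed above (three-step, diamond, and so on) are faster approximations of the same idea. This is an assumption-level sketch, not the patent's implementation.

```python
import numpy as np

def block_match(prev, curr, block=8, radius=4):
    """Exhaustive block-matching motion estimation.

    Returns one (dy, dx) motion vector per block of `curr`, found by
    minimizing the SAD against a window of `prev`.
    """
    h, w = curr.shape
    vectors = {}
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            target = curr[y:y + block, x:x + block].astype(np.int32)
            best, best_v = None, (0, 0)
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy <= h - block and 0 <= xx <= w - block:
                        cand = prev[yy:yy + block, xx:xx + block].astype(np.int32)
                        sad = np.abs(target - cand).sum()
                        if best is None or sad < best:
                            best, best_v = sad, (dy, dx)
            vectors[(y, x)] = best_v
    return vectors

rng = np.random.default_rng(0)
prev = rng.integers(0, 256, (16, 16), dtype=np.uint8)
curr = np.roll(prev, 2, axis=1)      # whole scene pans right by 2 pixels
assert block_match(prev, curr)[(8, 8)] == (0, -2)
```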
In a possible implementation manner, before the terminal performs video frame insertion on the first video to obtain a third video, the method further includes: the terminal carries out preprocessing operation on the video to be processed in the preview state of the camera application.
Fig. 2b is a software block diagram provided in the embodiment of the present application. As shown in fig. 2b, the preprocessing of the to-be-processed video in the preview state of the camera application may proceed as follows. The to-be-processed video is divided by layer into first video data of the video layer (the preview layer) and second video data of the UI layer (the control layer), and function selection, frame rate detection, layering detection, pop-up-frame detection, battery-level detection and switch detection are performed on the to-be-processed video through a configuration list and a front-end strategy. The result is then passed down transparently to the display device through the service code of the compositing service (SurfaceFlinger). Next, the middleware code of the hardware composer (HWC) of the hardware abstraction layer judges whether the layers are separated, and finally the display driver drives one or two downstream display devices depending on whether the layers are separated.
In a possible implementation manner, the terminal performs frame interpolation on each group of adjacent frame images in at least one group of adjacent frame images in the first video through the chip.
Fig. 2c is a hardware block diagram provided in this embodiment of the present application. As shown in fig. 2c, the preprocessed video is divided into two paths: a first video of the video layer and a second video of the UI layer. Each path enters a crossbar through a video interworking gateway (VIG) or smart direct memory access (SDMA), then passes in sequence through a layer mixer (LM), a local tone mapper (LTM) and a display post-processing unit (DSPP), and frame interpolation is performed on the video through display stream compression (DSC), the crossbar, a display serial interface (DSI) and an image display chip (for example, a PixelWorks chip) to obtain a third video. Frame interpolation is performed on the first video in the image display chip; an encoder then synthesizes the data of the interpolated third video and the data of the second video of the UI layer into target video data, which is stored in memory.
In one possible embodiment, a sampling period of a collected image of the camera is determined; determining the maximum screen refresh rate of the display screen; determining a first target frame rate in the third video according to the sampling period and the maximum screen refresh rate of the display screen; and performing video frame interpolation on the first video according to the first target frame rate.
For example, the terminal determines the sampling period of the images captured by the camera, determines from the sampling period that the first frame rate of the first video is 60 fps, determines that the maximum screen refresh rate of the display screen is 120 Hz, determines the first target frame rate to be 120 fps according to a first correspondence, and performs video frame interpolation on the first video according to the first target frame rate. The first correspondence maps the first frame rate and the maximum screen refresh rate to the first target frame rate. The first correspondence may simply take the maximum screen refresh rate as the first target frame rate, and it may be set by the terminal or by a network device.
In one possible embodiment, a first frame rate of the first video is determined; determining the maximum screen refresh rate of the display screen; and determining the frame number of at least one frame image to be inserted in each group of adjacent frame images in at least one group of adjacent frame images according to the first frame rate and the maximum screen refresh rate, and performing video frame insertion on the first video.
For example, when the terminal determines that the first frame rate in the first video is 60fps and the maximum screen refresh rate of the display screen is 120fps, the terminal determines that the first target frame rate is 120fps according to the first corresponding relationship, and performs video frame interpolation on the first video according to the first target frame rate.
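A small sketch of this frame-rate arithmetic, assuming the "first correspondence" is simply "target frame rate = maximum screen refresh rate" (one of the options named above):

```python
def frames_to_insert(sampling_period_s, max_refresh_hz):
    """Derive the first frame rate from the camera sampling period, take the
    maximum screen refresh rate as the first target frame rate, and return
    how many frames must be inserted per adjacent-frame gap."""
    first_rate = round(1.0 / sampling_period_s)   # e.g. 1/60 s -> 60 fps
    target_rate = max_refresh_hz                  # assumed correspondence
    # 60 fps -> 120 fps doubles the frame count: 1 new frame per adjacent pair
    return target_rate // first_rate - 1, target_rate

assert frames_to_insert(1 / 60, 120) == (1, 120)
```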
And S204, the terminal synthesizes the first data of the third video and the second data of the second video into target video data.
In a specific implementation, the terminal may synthesize the first data of the third video and the second data of the second video into target video data through an encoder.
And S205, storing the target video data.
The target video may be a static video, a dynamic video, a relatively still video, or a relatively moving video. The number of target videos may be one or more.
It can be seen that, in the video processing method described in the embodiment of the present application, a to-be-processed video in a preview state of a camera application is obtained; dividing the video to be processed into a first video of a video layer and a second video of a User Interface (UI) layer; performing video frame interpolation on the first video to obtain a third video; synthesizing the first data of the third video and the second data of the second video into target video data. According to the method and the device, the video to be processed in the preview state is divided into the video of the video layer and the video of the user interface layer, frame insertion is carried out on the video of the video layer, the video quality in the preview state is improved, and the problem that the video frame rate in the preview state applied to the camera is unstable is solved.
In one possible example, before the dividing the to-be-processed video into a first video of a video layer and a second video of a user interface UI layer, the method further includes: and judging that the first frame rate of the first video is less than or equal to a preset frame rate.
The preset frame rate may be any frame rate greater than or equal to 75 fps and less than or equal to 120 fps, for example 75 fps, 80 fps, 85 fps, 90 fps, 100 fps, 110 fps or 120 fps, among others.
In a specific implementation, the first frame rate may be determined as follows: the terminal determines the first frame rate of the first video by acquiring the sampling period of the images captured by the camera; alternatively, the terminal may calculate the first frame rate from the number of frames and the duration of the first video.
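A minimal sketch of this gate, assuming a preset frame rate of 75 fps (any value in the patent's 75-120 fps range would do):

```python
PRESET_FRAME_RATE = 75  # assumed value within the range given above

def needs_interpolation(frame_count, duration_s, preset=PRESET_FRAME_RATE):
    """Compute the first frame rate from frame count and duration, and only
    interpolate when it does not already exceed the preset frame rate."""
    first_frame_rate = frame_count / duration_s
    return first_frame_rate <= preset

assert needs_interpolation(600, 10.0)        # 60 fps <= 75 fps: interpolate
assert not needs_interpolation(1200, 10.0)   # 120 fps: already smooth enough
```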
In one possible example, the video frame interpolation on the first video to obtain a third video includes: determining at least one shot target in the first video; determining a motion state of each of the at least one photographed target; and performing video frame interpolation on the first video according to the motion state of each shot target.
The motion state of each photographed target may be a stationary state (motion speed of 0), a relatively stationary state, a moving state (non-zero motion speed), or a relative motion state.
In specific implementation, at least one shot target in the first video is determined; determining a reference object, and determining the motion state of each photographed target in the at least one photographed target according to the reference object; determining at least one frame image to be inserted between at least one group of adjacent frame images in the first video according to the motion state of each shot target; and performing video interpolation on the first video according to the at least one frame image.
The at least one frame image to be inserted between at least one group of adjacent frame images in the first video is determined according to the motion state of each shot object, and the at least one frame image to be inserted between at least one group of adjacent frame images in the first video can be determined according to the MEMC algorithm.
The frame number of the at least one frame of image is determined by the following specific implementation steps: the terminal determines a first frame rate of the first video; determining the maximum screen refresh rate of the display screen; and determining the number of frames of at least one frame image to be inserted in each group of adjacent frame images in at least one group of adjacent frame images according to the first frame rate and the maximum screen refresh rate.
The frame number of the at least one frame of image is determined by the following specific implementation steps: the terminal determines the sampling period of the collected image of the camera; determining the maximum screen refresh rate of the display screen; determining a first target frame rate in the third video according to the sampling period and the maximum screen refresh rate of the display screen; and determining the frame number of at least one frame image to be inserted in each group of adjacent frame images in at least one group of adjacent frame images according to the first target frame rate.
For example, the terminal determines that the vehicle in the first video is a shot target; determining the ground as a reference object, and determining the motion state of the vehicle as a static state according to the ground; determining at least one frame image to be inserted between at least one group of adjacent frame images in the first video according to the motion state of the vehicle; and performing video interpolation on the first video according to the at least one frame image.
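As an illustrative sketch of classifying a target's motion state against a reference object (the track representation and the threshold `eps` are assumptions, not from the patent):

```python
def motion_state(target_track, reference_track, eps=1.0):
    """Classify a photographed target's motion state against a reference object.

    Each track is a list of (x, y) center positions over consecutive frames;
    the state is 'static' when the target moves no faster than the reference,
    e.g. the vehicle relative to the ground in the example above.
    """
    def speed(track):
        return max((abs(x1 - x0) + abs(y1 - y0)
                    for (x0, y0), (x1, y1) in zip(track, track[1:])), default=0.0)

    relative = abs(speed(target_track) - speed(reference_track))
    return "static" if relative < eps else "moving"

assert motion_state([(0, 0), (0, 0)], [(0, 0), (0, 0)]) == "static"
assert motion_state([(0, 0), (5, 0)], [(0, 0), (0, 0)]) == "moving"
```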
It can be seen that at least one photographed target in the first video is determined in this example; determining a motion state of each of the at least one photographed target; and performing video frame interpolation on the first video according to the motion state of each shot target, improving the video quality in a preview state, and solving the problem of unstable video frame rate of a camera application in the preview state.
In one possible example, the video-interpolating the first video according to the motion state of each photographed target includes: determining that the terminal is in a static shooting state according to the motion state of each shot target; determining the motion track and the motion speed of each shot target according to the motion state of each shot target; and determining at least one frame image to be inserted between at least one group of adjacent frame images in the first video according to each shot target, the motion track and the motion speed of each shot target, and performing video frame insertion on the first video.
The motion state of each photographed target may be a stationary state (motion speed of 0), a relatively stationary state, a moving state (non-zero motion speed), or a relative motion state.
In a specific implementation, the determining that the terminal is in the static shooting state according to the motion state of each shot target may be determined by the following specific steps: and determining one shooting target of at least one shooting target as a reference object according to the motion state of each shooting target, and determining that the terminal is in a static shooting state according to the reference object.
In a specific implementation, determining at least one frame image to be inserted between each group of adjacent frame images in at least one group of adjacent frame images in the first video according to each shot target, the motion trajectory and the motion speed of each shot target, and performing video frame insertion on the first video may be: and determining at least one frame image to be inserted between each group of adjacent frame images in at least one group of adjacent frame images according to each shot target, the motion track and the motion speed of each shot target through an MEMC algorithm.
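A sketch of the trajectory step: with the terminal stationary, the positions of a moving target in the frames to be inserted can be sampled along its trajectory. Linear, constant-speed motion is assumed here for simplicity; in practice the MEMC algorithm named above performs the motion-vector estimation.

```python
def predicted_positions(pos_prev, pos_next, n):
    """For a stationary terminal, place a moving target in the n frames to be
    inserted by sampling its (assumed linear) trajectory at constant speed."""
    (x0, y0), (x1, y1) = pos_prev, pos_next
    return [(x0 + (x1 - x0) * k / (n + 1),
             y0 + (y1 - y0) * k / (n + 1)) for k in range(1, n + 1)]

# The target moves from (0, 0) to (10, 0) between two captured frames; one
# inserted frame shows it halfway along the track.
assert predicted_positions((0, 0), (10, 0), 1) == [(5.0, 0.0)]
```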
It can be seen that, in this example, the terminal is determined to be in a static shooting state according to the motion state of each shot target; determining the motion track and the motion speed of each shot target according to the motion state of each shot target; determining at least one frame image to be inserted between at least one group of adjacent frame images in the first video according to each shot target, the motion track and the motion speed of each shot target, and performing video frame insertion on the first video, so that the frame insertion efficiency is improved, the video quality in a preview state is improved, and the problem that the video frame rate in the preview state applied by a camera is unstable is solved.
In one possible example, the video-interpolating the first video according to the motion state of each photographed target includes: determining that the terminal is in a dynamic shooting state according to the motion state of each shot target; determining the moving speed and the stability degree of the terminal; and determining at least one frame image to be inserted between at least one group of adjacent frame images in the first video according to the moving speed and the stability degree of the terminal, and performing video frame insertion on the first video.
The moving speed includes the magnitude and the direction of the movement, both of which may be determined according to a reference object chosen by the terminal. The stability degree indicates how steady the terminal is while shooting.
In specific implementation, the terminal is determined to be in a dynamic shooting state according to the motion state of each shot target; determining the moving speed and the stability degree of the terminal; and determining the position of an object in at least one frame image to be inserted between at least one group of adjacent frame images in the first video in the image according to the moving speed of the terminal, the moving speed direction and the stability degree, and performing video frame insertion on the first video.
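The following sketch illustrates one way such compensation could look; the damping-by-stability model and all names are illustrative assumptions, not the patent's method:

```python
def compensate_for_terminal_motion(obj_pos, terminal_velocity, stability, t):
    """For a moving terminal, shift where an object lands in an inserted frame
    by the terminal's own (opposite-direction) motion, damped by how steady
    the device is (stability in [0, 1], 1 = perfectly steady)."""
    (x, y), (vx, vy) = obj_pos, terminal_velocity
    # Panning the camera right makes scene content drift left, hence the minus
    # sign; an unsteady device (low stability) gets its estimate attenuated.
    return (x - vx * t * stability, y - vy * t * stability)

assert compensate_for_terminal_motion((10, 0), (4, 0), 1.0, 0.5) == (8.0, 0.0)
```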
It can be seen that, in this example, the terminal is determined to be in the dynamic shooting state according to the motion state of each shot target; determining the moving speed and the stability degree of the terminal; determining at least one frame image to be inserted between at least one group of adjacent frame images in the first video according to the moving speed and the stability degree of the terminal, performing video frame insertion on the first video, improving the quality of the frame image to be inserted, enabling the video to be smoothly played, improving the quality of the video in a preview state, and solving the problem that the video frame rate of a camera application in the preview state is unstable.
In one possible example, the video frame interpolation on the first video to obtain a third video includes: and performing video frame interpolation on the first video according to the mode in the camera application to obtain the third video.
Wherein the mode in the camera application may be any one of: short video mode, slow motion mode, video recording mode, photographing mode, portrait mode, night scene mode, panoramic mode, professional mode, beauty mode, flash lamp adjusting mode, filter switching mode and depth of field mode.
In a specific implementation, the terminal may determine a parameter of a mode according to the mode in the camera application under the condition that the mode in the camera application is not changed, determine at least one frame image to be inserted between each group of adjacent frame images in at least one group of adjacent frame images in the first video according to the parameter of the mode, and perform video frame insertion on the first video.
The parameters include camera parameters and image parameters of the first video. The camera parameters include the model of the camera and the magnification of the lens. The camera model may be any one of the following: dual camera, single camera, wide-angle camera, telephoto camera. The image parameters include at least one of: resolution of the image, image brightness, image sharpness.
For example, in a video recording mode, determining a camera parameter and/or an image parameter in the first video in the video recording mode; and determining at least one frame image to be inserted between each group of adjacent frame images in at least one group of adjacent frame images in the first video according to the camera parameter and/or the image parameter in the first video, and performing video frame insertion on the first video.
In a specific implementation, the terminal may determine a parameter of the first mode and a parameter of the second mode according to the first mode and the second mode under the condition of mode switching in the camera application, determine at least one frame image to be inserted between each group of adjacent frame images in at least one group of adjacent frame images in the first video according to the parameter of the first mode and the parameter of the second mode, and perform video frame insertion on the first video.
It can be seen that in this example, according to the mode in the camera application, video interpolation is performed on the first video to obtain the third video, so that the frame interpolation efficiency is improved, the video quality in the preview state is improved, and the problem that the video frame rate of the camera application in the preview state is unstable is solved.
In one possible example, the video-inserting the first video according to the mode in the camera application to obtain the third video includes: determining parameters of a mode in the camera application according to the mode, wherein the parameters comprise camera parameters and/or image quality parameters; and determining at least one frame image to be inserted between each group of adjacent frame images in at least one group of adjacent frame images in the first video according to the parameters of the mode, and performing video frame insertion on the first video.
The camera parameters include at least one of: the model of the camera and the magnification of the lens. The camera model may be one of: dual camera, single camera, wide-angle camera, telephoto camera. The image quality parameter includes at least one of: resolution of the image, image brightness, image sharpness.
In specific implementation, according to a mode in the camera application, determining camera parameters of the mode; determining the model of the camera and the lens magnification according to the camera parameters; and determining the size of an object in each frame image in at least one frame image to be inserted between each group of adjacent frame images in at least one group of adjacent frame images in the first video according to the model of the camera and the magnification of the lens, and performing video frame insertion on the first video.
In a specific implementation, according to a mode in the camera application, an image quality parameter of the mode is determined, where the image quality parameter includes at least one of: the resolution, the image brightness and the image definition of the image; and determining the image quality parameter of at least one frame image to be inserted between each group of adjacent frame images in at least one group of adjacent frame images in the first video according to the image quality parameter of the mode, and performing video frame insertion on the first video.
In a specific implementation, according to a mode in the camera application, determining an image quality parameter of the mode, where the image quality parameter includes a resolution of an image; determining the resolution of the image according to the image quality parameters of the mode; and determining the frame number of at least one frame image to be inserted between each group of adjacent frame images in at least one group of adjacent frame images in the first video according to the image resolution, and performing video frame insertion on the first video.
For example, according to a mode in the camera application, determining a resolution of an image of the mode; judging whether the image resolution is smaller than a first image resolution or not; and determining the frame number of at least one frame image to be inserted between each group of adjacent frame images in at least one group of adjacent frame images in the first video according to the resolution of the first image, and performing video frame insertion on the first video. The first image resolution may be a preset image resolution, may be an image resolution set by the terminal, and may be an image resolution set by the network device.
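A sketch of a resolution-driven frame-count policy; the threshold and the 2-versus-1 choice are assumptions for illustration, since the patent leaves the concrete mapping open:

```python
FIRST_IMAGE_RESOLUTION = 1920 * 1080  # assumed threshold in pixels

def frames_by_resolution(width, height, threshold=FIRST_IMAGE_RESOLUTION):
    """Pick how many frames to insert per adjacent pair from the mode's image
    resolution: lower-resolution frames are cheaper to interpolate, so more
    can be inserted."""
    return 2 if width * height < threshold else 1

assert frames_by_resolution(1280, 720) == 2    # below threshold: insert more
assert frames_by_resolution(3840, 2160) == 1   # 4K: keep the load down
```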
In a specific implementation, according to a mode in the camera application, parameters of the mode are determined, where the parameters include a camera parameter and an image quality parameter, and the camera parameter includes at least one of: the model of the camera, the magnification of the lens, the image quality parameter includes at least one of the following: the resolution, the image brightness and the image definition of the image; and determining the object size and the image quality parameter of at least one frame image to be inserted between each group of adjacent frame images in at least one group of adjacent frame images in the first video according to the camera parameter and the image quality parameter of the mode, and performing video frame insertion on the first video.
It can be seen that in this example, in the same mode, according to the mode in the camera application, parameters of the mode are determined; according to the mode parameters, at least one frame image to be inserted between each group of adjacent frame images in at least one group of adjacent frame images in the first video is determined, video frame insertion is carried out on the first video to improve the quality of the frame image to be inserted, so that the video is smoothly played, the video quality in a preview state is improved, and the problem that the video frame rate in the preview state applied by a camera is unstable is solved.
In one possible example, the video-inserting the first video according to the mode in the camera application to obtain the third video includes: the first video comprises a first image set acquired in a first mode and a second image set acquired after the first mode is switched to a second mode, the first image set comprises at least one frame of first image, and the second image set comprises at least one frame of second image; the performing video frame interpolation on the first video according to the mode in the camera application to obtain the third video includes: determining the difference of the settings of the parameters of the first mode and the parameters of the second mode, wherein the parameters comprise camera parameters and/or image quality parameters; performing frame interpolation processing on a first group of adjacent image frames of the first video according to the set difference to obtain a third image set, wherein the first group of adjacent image frames comprises a first image and a second image, the first image is a last image acquired in the first mode, and the second image is a first image acquired in the second mode; performing frame interpolation on the first image set according to the parameters of the first mode to obtain a fourth image set, and performing frame interpolation on the second image set according to the parameters of the second mode to obtain a fifth image set; and generating a third video after frame insertion according to the third image set, the fourth image set and the fifth image set.
The camera parameters include at least one of: the model of the camera and the magnification of the lens. The camera model may be one of: dual camera, single camera, wide-angle camera, telephoto camera. The image quality parameter includes at least one of: resolution of the image, image brightness, image sharpness.
In a specific implementation, the first video includes a first image set acquired in a first mode and a second image set acquired after the first mode is switched to a second mode, the first image set includes at least one frame of first image, and the second image set includes at least one frame of second image; determining the difference of the settings of the camera parameters in the first mode and the camera parameters in the second mode; determining, according to the set difference, to perform frame interpolation processing on a first group of adjacent image frames of the first video to obtain a third image set, where the first group of adjacent image frames includes a first image and a second image, the first image is a last image acquired in the first mode, and the second image is a first image acquired in the second mode; performing frame interpolation on the first image set according to the parameters of the first mode to obtain a fourth image set, and performing frame interpolation on the second image set according to the parameters of the second mode to obtain a fifth image set; and generating a third video after frame insertion according to the third image set, the fourth image set and the fifth image set.
It should be further explained that, the specific implementation steps for determining the setting difference between the camera parameters in the first mode and the camera parameters in the second mode may be: determining a first lens magnification factor according to the camera parameters of the first mode; determining a second lens magnification according to the camera parameters of the second mode; determining a difference between the camera parameters of the first mode and the camera parameters of the second mode; and determining the lens magnification factor in each frame image of the at least one frame image to be inserted according to the frame number, the difference, the first lens magnification factor and the second lens magnification factor of the at least one frame image to be inserted into a first group of adjacent image frames of the first video.
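A minimal sketch of that magnification ramp, spreading the difference between the two modes' lens magnifications evenly across the inserted frames:

```python
def magnification_ramp(mag_first_mode, mag_second_mode, n):
    """Spread the lens-magnification difference between the two modes evenly
    across the n frames inserted between the last first-mode image and the
    first second-mode image, so the zoom transition looks continuous."""
    diff = mag_second_mode - mag_first_mode
    return [mag_first_mode + diff * k / (n + 1) for k in range(1, n + 1)]

# Switching from 1x (e.g. a photographing mode) to 2x (e.g. a zoomed video
# mode) with three inserted frames:
assert magnification_ramp(1.0, 2.0, 3) == [1.25, 1.5, 1.75]
```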
In a specific implementation, the first video includes a first image set acquired in a first mode and a second image set acquired after the first mode is switched to a second mode, the first image set includes at least one frame of first image, and the second image set includes at least one frame of second image; determining difference of settings of the image quality parameters of the first mode and the second mode; performing frame interpolation processing on a first group of adjacent image frames of the first video according to the set difference to obtain a third image set, wherein the first group of adjacent image frames comprises a first image and a second image, the first image is a last image acquired in the first mode, and the second image is a first image acquired in the second mode; performing frame interpolation on the first image set according to the image quality parameters of the first mode to obtain a fourth image set, and performing frame interpolation on the second image set according to the image quality parameters of the second mode to obtain a fifth image set; and generating a third video after frame insertion according to the third image set, the fourth image set and the fifth image set.
It should be further explained that, the specific implementation steps for determining the difference between the settings of the image quality parameter of the first mode and the image quality parameter of the second mode may be: determining a first resolution according to a first image quality parameter of the first mode; determining a second resolution according to a second image quality parameter of the second mode; determining a resolution difference between the first resolution and the second resolution; determining the resolution in each frame image of at least one frame image to be inserted according to the frame number, the resolution difference, the first resolution and the second resolution of at least one frame image to be inserted in a first group of adjacent image frames of a first video; and determining each frame of image according to the resolution in each frame of image.
It should be further explained that, the specific implementation steps for determining the difference between the settings of the image quality parameter of the first mode and the image quality parameter of the second mode may be: determining a first image brightness according to a first image quality parameter of the first mode; determining a second image brightness according to a second image quality parameter of the second mode; determining an image brightness difference between the first image brightness and the second image brightness; determining the image brightness in each frame image of at least one frame image to be inserted according to the frame number of at least one frame image to be inserted into a first group of adjacent image frames of a first video, the image brightness difference value, the first image brightness and the second image brightness; and determining each frame of image according to the image brightness in each frame of image.
It should be further explained that the setting difference between the image quality parameter of the first mode and that of the second mode may be determined as follows: determining a first image sharpness according to the first image quality parameter of the first mode; determining a second image sharpness according to the second image quality parameter of the second mode; determining the image sharpness difference between the first image sharpness and the second image sharpness; determining the image sharpness of each of the at least one frame image to be inserted according to the number of frames to be inserted into the first group of adjacent image frames of the first video, the image sharpness difference, the first image sharpness and the second image sharpness; and determining each frame image according to the image sharpness in that frame image.
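The same even-step ramp applies to any of these image quality parameters; a generic sketch with assumed numeric values:

```python
def quality_ramp(first_mode_value, second_mode_value, n):
    """Even-step interpolation for any image quality parameter above
    (resolution, image brightness, image sharpness): each of the n inserted
    frames takes an intermediate value between the two modes' settings."""
    step = (second_mode_value - first_mode_value) / (n + 1)
    return [first_mode_value + step * k for k in range(1, n + 1)]

# Brightness ramp from 100 to 160 across two inserted frames
assert quality_ramp(100, 160, 2) == [120.0, 140.0]
```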
it should be further explained that the specific implementation step of determining the difference between the settings of the image quality parameter in the first mode and the image quality parameter in the second mode may be any combination of the above specific implementation methods of setting the difference, and redundant description is omitted here.
In a specific implementation, the first video includes a first image set acquired in a first mode and a second image set acquired after the first mode is switched to a second mode, the first image set includes at least one frame of first image, and the second image set includes at least one frame of second image; determining the difference of the settings of the parameters of the first mode and the parameters of the second mode, wherein the parameters comprise camera parameters and image quality parameters; performing frame interpolation processing on a first group of adjacent image frames of the first video according to the set difference to obtain a third image set, wherein the first group of adjacent image frames comprises a first image and a second image, the first image is a last image acquired in the first mode, and the second image is a first image acquired in the second mode; performing frame interpolation on the first image set according to the parameters of the first mode to obtain a fourth image set, and performing frame interpolation on the second image set according to the parameters of the second mode to obtain a fifth image set; and generating a third video after frame insertion according to the third image set, the fourth image set and the fifth image set.
Wherein it is further explained that the setting difference of the parameters of the first mode and the parameters of the second mode is determined, and the parameters comprise camera parameters and image quality parameters; the specific method for performing frame interpolation processing on the first group of adjacent image frames of the first video according to the setting difference to obtain the third image set may be a combination of the two embodiments, and redundant description is omitted here.
The method for performing video interpolation on the first video according to the mode in the camera application to obtain the third video may be applied to a scene where two or more modes are switched, and the method may be an overlay processing of the method for performing video interpolation on the first video according to the mode in the camera application in the scene where the two modes are switched, and is not described in detail here.
It can be seen that, in this example, by determining the difference in the settings of the parameters of the first mode and the parameters of the second mode; performing frame interpolation processing on a first group of adjacent image frames of the first video according to the set difference to obtain a third image set; the parameters of the first mode are used for frame interpolation processing aiming at a first image set; the parameters of the second mode are used for performing frame interpolation processing on the second image set, different groups of adjacent image frames in the first video are processed in a classified mode, the quality of frame images to be interpolated is improved, the video is smoothly played, the video quality in a preview state is improved, and the problem that the video frame rate in the preview state applied by a camera is unstable is solved.
Referring to fig. 3, in accordance with the embodiment shown in fig. 2a, fig. 3 is a schematic flowchart of a video processing method according to an embodiment of the present application, where the video processing method includes:
s301, the terminal acquires a to-be-processed video of the camera application in a preview state;
s302, the terminal divides the video to be processed into a first video of a video layer and a second video of a user interface UI layer;
s303, the terminal determines at least one shot target in the first video;
s304, the terminal determines the motion state of each shot target in the at least one shot target;
s305, the terminal carries out video frame insertion on the first video according to the motion state of each shot target.
S306, the terminal synthesizes the first data of the third video and the second data of the second video into target video data;
and S307, the terminal stores the target video data.
It can be seen that, in the embodiment of the application, the to-be-processed video of the camera application in the preview state is acquired; dividing the video to be processed into a first video of a video layer and a second video of a User Interface (UI) layer; determining at least one shot target in the first video; determining a motion state of each of the at least one photographed target; performing video frame interpolation on the first video according to the motion state of each shot target; synthesizing the first data of the third video and the second data of the second video into target video data; and storing the target video data. The quality of the frame image to be inserted is improved, so that the video is smoothly played, the video quality in a preview state is improved, and the problem of unstable video frame rate of a camera application in the preview state is solved.
In one possible example, please refer to fig. 4, fig. 4 is a schematic structural diagram of an electronic device 400 provided in an embodiment of the present application, where the electronic device 400 may be the terminal. As shown in fig. 4, the electronic device 400 includes an application processor 410, a memory 420, a communication interface 430, and one or more programs 421, wherein the one or more programs 421 are stored in the memory 420 and configured to be executed by the application processor 410, and the one or more programs 421 include instructions for performing the steps of:
acquiring a video to be processed of a camera application in a preview state;
dividing the video to be processed into a first video of a video layer and a second video of a User Interface (UI) layer;
performing video frame interpolation on the first video to obtain a third video;
synthesizing the first data of the third video and the second data of the second video into target video data;
and storing the target video data.
In one possible example, the one or more programs 421 further include instructions for performing the following step before the to-be-processed video is divided into a first video of a video layer and a second video of a user interface (UI) layer: judging that the first frame rate of the first video is less than or equal to a preset frame rate.
In one possible example, in the aspect of performing video interpolation on the first video to obtain a third video, the one or more programs 421 are specifically configured to perform the following steps: determining at least one shot target in the first video; determining a motion state of each of the at least one photographed target; and performing video frame interpolation on the first video according to the motion state of each shot target.
In one possible example, in the aspect of video-frame-interpolation of the first video according to the motion state of each photographed object, the one or more programs 421 are specifically configured to perform the following steps: determining that the terminal is in a static shooting state according to the motion state of each shot target; determining the motion track and the motion speed of each shot target according to the motion state of each shot target; and determining at least one frame image to be inserted between at least one group of adjacent frame images in the first video according to each shot target, the motion track and the motion speed of each shot target, and performing video frame insertion on the first video.
In one possible example, in the aspect of video-frame-interpolation of the first video according to the motion state of each photographed object, the one or more programs 421 are specifically configured to perform the following steps: determining that the terminal is in a dynamic shooting state according to the motion state of each shot target; determining the moving speed and the stability degree of the terminal; and determining at least one frame image to be inserted between at least one group of adjacent frame images in the first video according to the moving speed and the stability degree of the terminal, and performing video frame insertion on the first video.
In one possible example, in the aspect of performing video frame interpolation on the first video to obtain a third video, the one or more programs 421 are further configured to perform the following step: performing video frame interpolation on the first video according to the mode in the camera application to obtain the third video.
In one possible example, in the aspect of performing video frame interpolation on the first video according to the mode in the camera application to obtain the third video, the one or more programs 421 are further configured to perform the following steps: determining parameters of the mode in the camera application, wherein the parameters comprise camera parameters and/or image quality parameters; and determining, according to the parameters of the mode, at least one frame image to be inserted between each of at least one group of adjacent frame images in the first video, and performing video frame interpolation on the first video.
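A sketch of parameter-driven interpolation density follows; the mode names, the choice of exposure time as the camera parameter, and all values are assumptions for illustration, since the patent only states that the parameters include camera parameters and/or image quality parameters.

```python
# Hypothetical per-mode parameters: exposure time (ms) as a camera
# parameter and an image-quality weight in [0, 1].
MODE_PARAMS = {
    "photo": {"exposure_ms": 10.0, "quality": 0.9},
    "night": {"exposure_ms": 66.0, "quality": 0.6},
    "video": {"exposure_ms": 16.0, "quality": 0.8},
}

def frames_to_insert_for_mode(mode: str) -> int:
    p = MODE_PARAMS[mode]
    # Long exposures (e.g., night mode) already blur motion and tolerate
    # fewer synthesized frames; short, high-quality exposures take more.
    if p["exposure_ms"] > 33.0 or p["quality"] < 0.7:
        return 1
    return 2

print(frames_to_insert_for_mode("night"))   # 1
print(frames_to_insert_for_mode("photo"))   # 2
```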
In one possible example, the first video comprises a first image set acquired in a first mode and a second image set acquired after the first mode is switched to a second mode, the first image set comprises at least one frame of first image, and the second image set comprises at least one frame of second image. In this case, in the aspect of performing video frame interpolation on the first video according to the mode in the camera application to obtain the third video, the one or more programs 421 are further configured to perform the following steps: determining a setting difference between the parameters of the first mode and the parameters of the second mode, wherein the parameters comprise camera parameters and/or image quality parameters; performing frame interpolation processing on a first group of adjacent image frames of the first video according to the setting difference to obtain a third image set, wherein the first group of adjacent image frames comprises a first image and a second image, the first image is the last image acquired in the first mode, and the second image is the first image acquired in the second mode; performing frame interpolation processing on the first image set according to the parameters of the first mode to obtain a fourth image set, and performing frame interpolation processing on the second image set according to the parameters of the second mode to obtain a fifth image set; and generating the third video after frame interpolation according to the third image set, the fourth image set and the fifth image set.
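The three-part interpolation around a mode switch can be sketched as follows, with frames as flat pixel lists and each mode's parameters reduced to a single scalar; weighting the boundary frame by the setting difference is a hedged reading of the step, not the patent's exact computation.

```python
def blend(a, b, w):
    return [x * (1 - w) + y * w for x, y in zip(a, b)]

def interpolate_across_mode_switch(first_set, second_set, param_a, param_b):
    # Third image set: frame(s) interpolated across the mode boundary,
    # i.e. between the last frame of mode A and the first frame of mode B,
    # weighted by the setting difference between the two modes.
    diff = abs(param_a - param_b)
    w = 0.5 / (1.0 + diff)     # larger difference -> stay closer to mode A
    third = [blend(first_set[-1], second_set[0], w)]
    # Fourth / fifth image sets: interpolation inside each mode using that
    # mode's own parameters (reduced here to a fixed midpoint blend).
    fourth = [blend(a, b, 0.5) for a, b in zip(first_set, first_set[1:])]
    fifth = [blend(a, b, 0.5) for a, b in zip(second_set, second_set[1:])]
    return third, fourth, fifth   # assembled in order into the third video

third, fourth, fifth = interpolate_across_mode_switch(
    [[0, 0], [10, 10]], [[40, 40], [50, 50]], param_a=1.0, param_b=2.0)
print(third)   # [[17.5, 17.5]] (boundary frame biased toward mode A)
```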
It can be seen that, with the video processing method and the related apparatus of the embodiments of the present application, the to-be-processed video of the camera application in the preview state is acquired; the to-be-processed video is divided into a first video of a video layer and a second video of a user interface (UI) layer; video frame interpolation is performed on the first video to obtain a third video; the first data of the third video and the second data of the second video are synthesized into target video data; and the target video data is stored. By dividing the to-be-processed video in the preview state into a video-layer video and a UI-layer video and performing frame interpolation only on the video-layer video, the frame interpolation efficiency is improved and the problem of an unstable video frame rate of the camera application in the preview state is solved.
The above description has introduced the solution of the embodiments of the present application mainly from the perspective of the method-side implementation process. It can be understood that, to realize the above functions, the electronic device comprises corresponding hardware structures and/or software modules for performing the respective functions. Those of skill in the art will readily appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments provided herein can be implemented as hardware or as a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends upon the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments of the present application, the electronic device may be divided into functional units according to the above method examples; for example, each functional unit may correspond to one function, or two or more functions may be integrated into one processing unit. The integrated unit can be implemented in the form of hardware or in the form of a software functional unit. It should be noted that the division of units in the embodiments of the present application is schematic and is only a division of logical functions; other division manners are possible in actual implementation.
Fig. 5 is a block diagram of functional units of a video processing apparatus 500 according to an embodiment of the present application. The video processing apparatus 500 includes a communication unit 501 and a processing unit 502.
The processing unit is configured to acquire a to-be-processed video when the camera application is in a preview state; divide the to-be-processed video into a first video of a video layer and a second video of a user interface (UI) layer; perform video frame interpolation on the first video to obtain a third video; synthesize the first data of the third video and the second data of the second video into target video data; and store the target video data.
The video processing apparatus 500 further includes a storage unit 503. The processing unit 502 may be a processor or a controller, such as a central processing unit (CPU), a general-purpose processor, a digital signal processor (DSP), another programmable logic device, a transistor logic device, a hardware component, or any combination thereof, and may implement or perform the various illustrative logical blocks, modules, and circuits described in connection with this disclosure. The processor may also be a combination of computing devices, for example, a combination of one or more microprocessors, or a combination of a DSP and a microprocessor. The communication unit 501 may be a communication interface, a transceiver, a transceiving circuit, or the like, and the storage unit 503 may be a memory. When the processing unit 502 is a processor, the communication unit 501 is a communication interface, and the storage unit 503 is a memory, the terminal according to the embodiment of the present application may be the electronic device shown in fig. 4.
In a possible example, the processing unit is further configured to determine, before the to-be-processed video is divided into the first video of the video layer and the second video of the user interface (UI) layer, that a first frame rate of the first video is less than or equal to a preset frame rate.
In one possible example, in the aspect of performing video frame interpolation on the first video to obtain a third video, the processing unit is specifically configured to: determine at least one photographed target in the first video; determine a motion state of each of the at least one photographed target; and perform video frame interpolation on the first video according to the motion state of each photographed target.
In one possible example, in the aspect of performing video frame interpolation on the first video according to the motion state of each photographed target, the processing unit is specifically configured to: determine, according to the motion state of each photographed target, that the terminal is in a static shooting state; determine the motion track and the motion speed of each photographed target according to its motion state; and determine, according to each photographed target and its motion track and motion speed, at least one frame image to be inserted between at least one group of adjacent frame images in the first video, and perform video frame interpolation on the first video.
In one possible example, in the aspect of performing video frame interpolation on the first video according to the motion state of each photographed target, the processing unit is specifically configured to: determine, according to the motion state of each photographed target, that the terminal is in a dynamic shooting state; determine the moving speed and the stability degree of the terminal; and determine, according to the moving speed and the stability degree of the terminal, at least one frame image to be inserted between at least one group of adjacent frame images in the first video, and perform video frame interpolation on the first video.
In one possible example, in the aspect of performing video frame interpolation on the first video to obtain a third video, the processing unit is specifically configured to perform video frame interpolation on the first video according to the mode in the camera application to obtain the third video.
In a possible example, in the aspect of performing video frame interpolation on the first video according to the mode in the camera application to obtain the third video, the processing unit is specifically configured to: determine parameters of the mode in the camera application, wherein the parameters comprise camera parameters and/or image quality parameters; and determine, according to the parameters of the mode, at least one frame image to be inserted between each of at least one group of adjacent frame images in the first video, and perform video frame interpolation on the first video.
In a possible example, the first video comprises a first image set acquired in a first mode and a second image set acquired after the first mode is switched to a second mode, the first image set comprises at least one frame of first image, and the second image set comprises at least one frame of second image. In this case, in the aspect of performing video frame interpolation on the first video according to the mode in the camera application to obtain the third video, the processing unit is specifically configured to: determine a setting difference between the parameters of the first mode and the parameters of the second mode, wherein the parameters comprise camera parameters and/or image quality parameters; perform frame interpolation processing on a first group of adjacent image frames of the first video according to the setting difference to obtain a third image set, wherein the first group of adjacent image frames comprises a first image and a second image, the first image is the last image acquired in the first mode, and the second image is the first image acquired in the second mode; perform frame interpolation processing on the first image set according to the parameters of the first mode to obtain a fourth image set, and perform frame interpolation processing on the second image set according to the parameters of the second mode to obtain a fifth image set; and generate the third video after frame interpolation according to the third image set, the fourth image set and the fifth image set.
Embodiments of the present application also provide a computer storage medium, where the computer storage medium stores a computer program for electronic data exchange, the computer program enabling a computer to execute part or all of the steps of any one of the methods described in the above method embodiments, and the computer includes an electronic device.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of any of the methods as described in the above method embodiments. The computer program product may be a software installation package, the computer comprising an electronic device.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative; for instance, the division of the units is only a division of logical functions, and other divisions may be used in practice: a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be in electrical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit may be stored in a computer-readable memory if it is implemented in the form of a software functional unit and sold or used as a stand-alone product. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned memory includes various media capable of storing program code, such as a USB flash disk, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable memory, which may include: flash memory disks, Read-only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
The foregoing detailed description of the embodiments of the present application has been presented to illustrate the principles and implementations of the present application; the above description of the embodiments is only provided to help understand the method and the core concept of the present application. Meanwhile, a person skilled in the art may, according to the idea of the present application, make changes to the specific embodiments and the application scope. In summary, the content of this specification should not be construed as a limitation on the present application.

Claims (11)

1. A video processing method is applied to a terminal, and the method comprises the following steps:
acquiring a video to be processed of a camera application in a preview state;
dividing the video to be processed into a first video of a video layer and a second video of a User Interface (UI) layer;
performing video frame interpolation on the first video to obtain a third video, wherein the number of frames of images inserted in the first video is determined according to the frame rate of the first video and the maximum screen refresh rate of a terminal display screen;
synthesizing the first data of the third video and the second data of the second video into target video data;
and storing the target video data.
2. The method of claim 1, wherein before the separating the to-be-processed video into a first video of a video layer and a second video of a User Interface (UI) layer, further comprising:
and judging that the first frame rate of the first video is less than or equal to a preset frame rate.
3. The method of claim 1, wherein the video-interpolating the first video to obtain a third video comprises:
determining at least one photographed target in the first video;
determining a motion state of each of the at least one photographed target;
and performing video frame interpolation on the first video according to the motion state of each shot target.
4. The method according to claim 3, wherein the video-interpolating the first video according to the motion state of each photographed target comprises:
determining that the terminal is in a static shooting state according to the motion state of each shot target;
determining the motion track and the motion speed of each shot target according to the motion state of each shot target;
and determining at least one frame image to be inserted between at least one group of adjacent frame images in the first video according to each shot target, the motion track and the motion speed of each shot target, and performing video frame insertion on the first video.
5. The method according to claim 3, wherein the video-interpolating the first video according to the motion state of each photographed target comprises:
determining that the terminal is in a dynamic shooting state according to the motion state of each shot target;
determining the moving speed and the stability degree of the terminal;
and determining at least one frame image to be inserted between at least one group of adjacent frame images in the first video according to the moving speed and the stability degree of the terminal, and performing video frame insertion on the first video.
6. The method of claim 1, wherein the video-interpolating the first video to obtain a third video comprises:
and performing video frame interpolation on the first video according to the mode in the camera application to obtain the third video.
7. The method of claim 6, wherein the video-inserting the first video according to the mode in the camera application to obtain the third video comprises:
determining parameters of a mode in the camera application according to the mode, wherein the parameters comprise camera parameters and/or image quality parameters;
and determining at least one frame image to be inserted between each group of adjacent frame images in at least one group of adjacent frame images in the first video according to the parameters of the mode, and performing video frame insertion on the first video.
8. The method of claim 6, wherein the video-inserting the first video according to the mode in the camera application to obtain the third video comprises:
the first video comprises a first image set acquired in a first mode and a second image set acquired after the first mode is switched to a second mode, the first image set comprises at least one frame of first image, and the second image set comprises at least one frame of second image;
the performing video frame interpolation on the first video according to the mode in the camera application to obtain the third video includes:
determining the difference of the settings of the parameters of the first mode and the parameters of the second mode, wherein the parameters comprise camera parameters and/or image quality parameters;
performing frame interpolation processing on a first group of adjacent image frames of the first video according to the set difference to obtain a third image set, wherein the first group of adjacent image frames comprises a first image and a second image, the first image is a last image acquired in the first mode, and the second image is a first image acquired in the second mode;
performing frame interpolation processing on the first image set according to the parameters of the first mode to obtain a fourth image set, and performing frame interpolation processing on the second image set according to the parameters of the second mode to obtain a fifth image set;
and generating a third video after frame insertion according to the third image set, the fourth image set and the fifth image set.
9. A video processing apparatus, applied to a terminal, the apparatus comprising: a communication unit and a processing unit, wherein,
the processing unit is configured to acquire a to-be-processed video when the camera application is in a preview state; divide the to-be-processed video into a first video of a video layer and a second video of a User Interface (UI) layer; perform video frame interpolation on the first video to obtain a third video, wherein the number of frames of images inserted in the first video is determined according to the frame rate of the first video and the maximum screen refresh rate of a terminal display screen; synthesize the first data of the third video and the second data of the second video into target video data; and store the target video data.
10. An electronic device comprising a processor, a memory, a communication interface, and one or more programs stored in the memory and configured to be executed by the processor, the programs comprising instructions for performing the steps in the method of any of claims 1-8.
11. A computer-readable storage medium, characterized in that a computer program for electronic data exchange is stored, wherein the computer program causes a computer to perform the method according to any one of claims 1-8.
CN202010149267.2A 2020-03-05 2020-03-05 Video processing method and related device Active CN111277779B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010149267.2A CN111277779B (en) 2020-03-05 2020-03-05 Video processing method and related device
PCT/CN2021/074417 WO2021175055A1 (en) 2020-03-05 2021-01-29 Video processing method and related device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010149267.2A CN111277779B (en) 2020-03-05 2020-03-05 Video processing method and related device

Publications (2)

Publication Number Publication Date
CN111277779A CN111277779A (en) 2020-06-12
CN111277779B true CN111277779B (en) 2022-05-06

Family

ID=71000518

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010149267.2A Active CN111277779B (en) 2020-03-05 2020-03-05 Video processing method and related device

Country Status (2)

Country Link
CN (1) CN111277779B (en)
WO (1) WO2021175055A1 (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111277779B (en) * 2020-03-05 2022-05-06 Oppo广东移动通信有限公司 Video processing method and related device
CN111741266B (en) * 2020-06-24 2022-03-15 北京梧桐车联科技有限责任公司 Image display method and device, vehicle-mounted equipment and storage medium
CN111899680B (en) * 2020-07-14 2023-04-18 青岛海信医疗设备股份有限公司 Display device and setting method thereof
CN114363700A (en) * 2020-10-12 2022-04-15 阿里巴巴集团控股有限公司 Data processing method, data processing device, storage medium and computer equipment
CN112532880B (en) * 2020-11-26 2022-03-11 展讯通信(上海)有限公司 Video processing method and device, terminal equipment and storage medium
CN112565865A (en) * 2020-11-30 2021-03-26 维沃移动通信有限公司 Image processing method and device and electronic equipment
CN113835656A (en) * 2021-09-08 2021-12-24 维沃移动通信有限公司 Display method and device and electronic equipment
CN113835657A (en) * 2021-09-08 2021-12-24 维沃移动通信有限公司 Display method and electronic equipment
CN113766275B (en) * 2021-09-29 2023-05-30 北京达佳互联信息技术有限公司 Video editing method, device, terminal and storage medium
CN113596564B (en) * 2021-09-29 2021-12-28 卡莱特云科技股份有限公司 Picture playing method and device
CN114489882B (en) * 2021-12-16 2023-05-19 成都鲁易科技有限公司 Method and device for realizing dynamic skin of browser and storage medium
CN114302209A (en) * 2021-12-28 2022-04-08 维沃移动通信有限公司 Video processing method, video processing device, electronic equipment and medium
CN114339313A (en) * 2021-12-28 2022-04-12 维沃移动通信有限公司 Frame insertion method and device and electronic equipment
CN115442517B (en) * 2022-07-26 2023-07-25 荣耀终端有限公司 Image processing method, electronic device, and computer-readable storage medium
CN115331500A (en) * 2022-08-19 2022-11-11 河南林业职业学院 Design method of ornamental peony cultivation mobile phone platform based on virtual reality

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN203313319U (en) * 2013-06-09 2013-11-27 深圳创维-Rgb电子有限公司 Display system
CN103702059A (en) * 2013-12-06 2014-04-02 乐视致新电子科技(天津)有限公司 Frame rate conversion control method and device
CN109275011A (en) * 2018-09-03 2019-01-25 青岛海信传媒网络技术有限公司 The processing method and processing device of smart television motor pattern switching, user equipment
CN110636375A (en) * 2019-11-11 2019-12-31 RealMe重庆移动通信有限公司 Video stream processing method and device, terminal equipment and computer readable storage medium

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5023893B2 (en) * 2007-08-31 2012-09-12 ソニー株式会社 Display device
KR20090054828A (en) * 2007-11-27 2009-06-01 삼성전자주식회사 Video apparatus for adding gui to frame rate converted video and gui providing using the same
US8045836B2 (en) * 2008-01-11 2011-10-25 Texas Instruments Incorporated System and method for recording high frame rate video, replaying slow-motion and replaying normal speed with audio-video synchronization
US8538233B2 (en) * 2011-08-24 2013-09-17 Disney Enterprises, Inc. Automatic camera identification from a multi-camera video stream
US20140056354A1 (en) * 2012-08-21 2014-02-27 Mediatek Inc. Video processing apparatus and method
CN109640168B (en) * 2018-11-27 2020-07-24 Oppo广东移动通信有限公司 Video processing method, video processing device, electronic equipment and computer readable medium
CN109814710B (en) * 2018-12-27 2022-05-13 青岛小鸟看看科技有限公司 Data processing method and device and virtual reality equipment
CN110557626B (en) * 2019-07-31 2021-06-08 华为技术有限公司 Image display method and electronic equipment
CN111277779B (en) * 2020-03-05 2022-05-06 Oppo广东移动通信有限公司 Video processing method and related device

Also Published As

Publication number Publication date
CN111277779A (en) 2020-06-12
WO2021175055A1 (en) 2021-09-10

Similar Documents

Publication Publication Date Title
CN111277779B (en) Video processing method and related device
CN111327908B (en) Video processing method and related device
US7952596B2 (en) Electronic devices that pan/zoom displayed sub-area within video frames in response to movement therein
US8866943B2 (en) Video camera providing a composite video sequence
CN108989830A (en) A kind of live broadcasting method, device, electronic equipment and storage medium
JP5190117B2 (en) System and method for generating photos with variable image quality
CN111356026B (en) Image data processing method and related device
US20130235223A1 (en) Composite video sequence with inserted facial region
CN111405339B (en) Split screen display method, electronic equipment and storage medium
EP2193662A2 (en) System and method for video coding using variable compression and object motion tracking
CN110971841B (en) Image processing method, image processing device, storage medium and electronic equipment
CN112532808A (en) Image processing method and device and electronic equipment
CN104967778A (en) Focusing reminding method and terminal
CN113747240A (en) Video processing method, apparatus, storage medium, and program product
CN113225606A (en) Video barrage processing method and device
CN111654747B (en) Bullet screen display method and device
CN110990088A (en) Data processing method and related equipment
CN110941413B (en) Display screen generation method and related device
CN110602410A (en) Image processing method and device, aerial camera and storage medium
CN113691737B (en) Video shooting method and device and storage medium
CN113438436B (en) Video playing method, video conference method, live broadcast method and related equipment
CN114143471A (en) Image processing method, system, mobile terminal and computer readable storage medium
CN112887620A (en) Video shooting method and device and electronic equipment
CN115423728A (en) Image processing method, device and system
CN111367598A (en) Action instruction processing method and device, electronic equipment and computer-readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant