CN116156250A - Video processing method and device - Google Patents


Info

Publication number
CN116156250A
Authority
CN
China
Prior art keywords
image frames
target
image
frame
motion speed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310149440.2A
Other languages
Chinese (zh)
Inventor
Zhang Kai (张凯)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202310149440.2A
Publication of CN116156250A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44008 Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H04N21/4402 Processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a video processing method and device, belonging to the technical field of video. The video processing method provided by the embodiment of the application comprises the following steps: determining the motion speed of each object in an original video based on a plurality of image frames included in the original video; determining, among the objects, an object whose motion speed meets a target condition as a reference object; adjusting the number of image frames contained in the original video according to the motion speed of the reference object and the motion speed of a target object, to obtain image frames corresponding to the target object, where the target object is any object other than the reference object and the number of image frames corresponding to the target object matches the motion speed of the reference object and the motion speed of the target object; and generating a target video according to the plurality of image frames and the image frames corresponding to each target object.

Description

Video processing method and device
Technical Field
The application belongs to the technical field of video, and particularly relates to a video processing method and a device thereof.
Background
As mobile phone cameras become richer in features, the scenarios in which users record video with their phones grow more varied. The phone's video recording function includes fast-motion (time-lapse) recording and slow-motion recording, which make the captured subject appear to move faster or slower.
In the prior art, fast-motion or slow-motion recording can only accelerate or decelerate the whole picture, so objects that already move fast become even faster in fast motion, and objects that already move slowly become even slower in slow motion. As a result, a video recorded with a single fast-motion or slow-motion function may contain objects that move far too fast or far too slow, and this motion disorder makes the playback effect poor.
Disclosure of Invention
The embodiment of the application aims to provide a video processing method and device which can solve the prior-art problem of a poor video playback effect caused by disordered object motion.
In a first aspect, an embodiment of the present application provides a video processing method, where the method includes:
determining the motion speed of each object in an original video based on a plurality of image frames included in the original video;
determining an object with the motion speed meeting the target condition in the objects as a reference object;
according to the motion speed of the reference object and the motion speed of the target object, the number of image frames contained in the original video is adjusted to obtain image frames corresponding to the target object; the target object is any object except the reference object in the objects, and the number of image frames corresponding to the target object is matched with the moving speed of the reference object and the moving speed of the target object;
And generating a target video according to the plurality of image frames and the image frames corresponding to the target objects.
In a second aspect, an embodiment of the present application provides a video processing apparatus, including:
the first determining module is used for determining the motion speed of each object in the original video based on a plurality of image frames included in the original video;
the second determining module is used for determining an object with the motion speed meeting the target condition in the objects as a reference object;
the adjusting module is used for adjusting the number of the image frames contained in the original video according to the movement speed of the reference object and the movement speed of the target object to obtain the image frames corresponding to the target object; the number of the image frames corresponding to the target object is matched with the movement speed of the reference object and the movement speed of the target object;
and the generating module is used for generating a target video according to the plurality of image frames and the image frames corresponding to the target objects.
In a third aspect, embodiments of the present application provide an electronic device comprising a processor and a memory storing a program or instructions executable on the processor, which when executed by the processor, implement the steps of the video processing method as described in the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium having stored thereon a program or instructions which, when executed by a processor, implement the steps of the video processing method according to the first aspect.
In a fifth aspect, embodiments of the present application provide a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the steps of the video processing method according to the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product stored in a storage medium, the program product being executed by at least one processor to implement the steps of the video processing method as described in the first aspect.
In the embodiment of the application, the motion speed of each object in the original video is determined based on a plurality of image frames included in the original video; an object whose motion speed meets the target condition is determined as a reference object; the number of image frames contained in the original video is adjusted according to the motion speed of the reference object and the motion speed of the target object, to obtain image frames corresponding to the target object, where the target object is any object other than the reference object and the number of image frames corresponding to the target object matches the motion speed of the reference object and the motion speed of the target object; and a target video is generated according to the plurality of image frames and the image frames corresponding to each target object. In this way, the object whose motion speed meets the target condition serves as the reference, and the image frames corresponding to each target object are obtained based on the motion speed of the reference object and the motion speed of that target object, so that the number of image frames corresponding to each target object matches both speeds. Consequently, in the target video generated from the plurality of image frames and the image frames corresponding to each target object, the motion speeds of the objects are better coordinated; the problem of a poor video effect caused by motion disorder can be avoided to a certain extent, and the playback effect is improved.
Drawings
FIG. 1 is a schematic diagram illustrating steps of a video processing method according to an embodiment of the present application;
FIG. 2 is a schematic diagram illustrating steps of another video processing method according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a video processing apparatus according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of another electronic device according to an embodiment of the present application.
Detailed Description
Technical solutions in the embodiments of the present application will be clearly described below with reference to the drawings in the embodiments of the present application; it is apparent that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application fall within the scope of protection of the present application.
The terms "first", "second", and the like in the description and claims are used to distinguish between similar objects, not to describe a particular order or sequence. It should be understood that terms so used are interchangeable under appropriate circumstances, so that the embodiments of the application can be practiced in orders other than those illustrated or described herein. Objects identified by "first", "second", etc. are generally of one type, and the number of objects is not limited; for example, there may be one or more first objects. Furthermore, in the description and claims, "and/or" denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The video processing method provided by the embodiment of the application is described in detail below by means of specific embodiments with reference to the accompanying drawings.
An embodiment of the present application provides a video processing method, as shown in fig. 1, where the video processing method includes:
step S1, determining the motion speed of each object in the original video based on a plurality of image frames included in the original video.
It should be noted that video recording captures one image frame at a fixed time interval, for example one frame every 1/30 second, i.e., 30 frames per second, thereby obtaining a plurality of image frames. The captured image frames are then packed and compressed to generate a video file. By decompressing the video file, the plurality of image frames used to generate it can be recovered.
In this embodiment of the present application, the original video may be a video file obtained by shooting through a mobile terminal such as a mobile phone, a tablet, or a photographing device such as a camera. The plurality of image frames included in the original video may be obtained by decompressing the original video into an image of one frame by one frame. The plurality of image frames may be arranged in a certain order, and the arrangement order of the plurality of image frames may be a photographing order of the plurality of image frames in the original video. The object in the original video may be an object whose picture content is displayed in a plurality of image frames, for example, sun, tree, pedestrian, or the like.
In this embodiment of the application, the motion trail of each object across the plurality of image frames can be obtained from the change in that object's position in the image frames of the original video, and the motion time of each object can be determined from the shooting interval of the image frames; the motion speed of each object in the original video is then calculated from its motion trail and motion time. Specifically, the motion trail of any object may be calculated from the change of the object's coordinate positions across the image frames, and its motion time may be calculated from the frame rate of the original video or the shooting interval of the image frames; other manners of obtaining the motion trail and motion time may also be adopted.
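As a minimal illustration of the speed estimate described above (not the patent's actual implementation), the sketch below assumes each object's per-frame centroid positions have already been obtained by some tracking step; the function name and the units (pixels per second) are illustrative assumptions:

```python
def estimate_speed(positions, frame_interval):
    """Estimate an object's average motion speed (pixels per second) from
    its centroid positions in consecutive image frames.

    positions: list of (x, y) centroids, one per frame, in capture order.
    frame_interval: seconds between consecutive frames (e.g. 1/30).
    """
    if len(positions) < 2:
        return 0.0
    # Motion trail length: sum of Euclidean distances between consecutive positions.
    path = 0.0
    for (x0, y0), (x1, y1) in zip(positions, positions[1:]):
        path += ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
    # Motion time: number of inter-frame gaps times the shooting interval.
    elapsed = (len(positions) - 1) * frame_interval
    return path / elapsed
```

For instance, an object whose centroid moves 5 pixels per frame at 2 frames per second has an estimated speed of 10 pixels per second.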
And S2, determining the object with the motion speed meeting the target condition in the objects as a reference object.
In this embodiment of the present application, the target condition may be that a difference between the movement speed of the object and the reference speed is a preset value. The preset value may be zero or a fixed value, which is not limited in this embodiment of the present application. The reference speed may be an arithmetic average corresponding to the movement speed of each object. Alternatively, the reference speed may be a weighted average corresponding to the motion speed of each object, specifically, the weight corresponding to any one object may be set according to actual needs, and the weighted average may be calculated according to the motion speed of each object and the weight corresponding to each object, and the weighted average is used as the reference speed.
For example, if the sun moves slowly and a pedestrian moves fast, the weight corresponding to the sun may be set to 10 and the weight corresponding to the pedestrian to 1; in this way, the influence of extremely fast or extremely slow objects on the reference speed can be reduced.
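The weighted-average reference speed described above can be sketched as follows; `reference_speed` is a hypothetical name, and the test values follow the sun/pedestrian example of high weight on the slow object:

```python
def reference_speed(speeds, weights=None):
    """Compute the reference speed as a (weighted) average of object speeds.

    With no weights this is the arithmetic mean; non-uniform weights can
    damp the influence of extremely fast or slow objects.
    """
    if weights is None:
        weights = [1.0] * len(speeds)
    total = sum(w * v for w, v in zip(weights, speeds))
    return total / sum(weights)
```

With speeds [1.0, 100.0] and weights [10, 1], the weighted reference speed is (10·1 + 1·100) / 11 = 10.0, far closer to the slow object than the unweighted mean of 50.5.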
If two or more objects meet the target condition, the motion speed of the reference object may be determined from the motion speeds of all such objects; specifically, the average of those motion speeds may be calculated and used as the motion speed of the reference object.
Alternatively, the target condition may be that the absolute value of the difference between the motion speed of the object and the reference speed is the smallest. In this case, the object whose motion speed has the smallest absolute difference from the reference speed is taken as the reference object, and the motion speed of the reference object is the motion speed of that object.
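The alternative condition above (smallest absolute difference from the reference speed) amounts to a simple argmin; the sketch below is illustrative, with a hypothetical function name:

```python
def pick_reference(speeds, ref_speed):
    """Return the index of the object whose motion speed is closest to
    ref_speed, i.e. the object with the smallest |speed - ref_speed|."""
    return min(range(len(speeds)), key=lambda i: abs(speeds[i] - ref_speed))
```

For speeds [2.0, 5.0, 9.0] and a reference speed of 6.0, the object with speed 5.0 is selected as the reference object.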
Step S3, according to the motion speed of the reference object and the motion speed of the target object, the number of image frames contained in the original video is adjusted, and the image frames corresponding to the target object are obtained; the target object is any object other than the reference object, and the number of image frames corresponding to the target object matches the motion speed of the reference object and the motion speed of the target object.
In this embodiment of the application, for any target object other than the reference object, the difference between the motion speed of the reference object and the motion speed of the target object may be calculated. If the difference is positive, the target object moves slower than the reference object; the number of image frames may then be proportionally reduced according to the ratio of the motion speed of the reference object to that of the target object, and the reduced set of image frames is taken as the image frames corresponding to the target object. If the difference is negative, the target object moves faster than the reference object; the number of image frames may then be proportionally increased according to the same ratio, and the enlarged set of image frames is taken as the image frames corresponding to the target object.
In the embodiment of the application, the number of image frames corresponding to the target object is either the number remaining after the proportional reduction or the total number after the proportional increase. The number of image frames corresponding to the target object therefore matches the ratio of the motion speed of the reference object to that of the target object, and hence matches the two motion speeds.
The image frames corresponding to the target object may be Z image frames arranged in a certain order, where Z is a positive integer and the order of the Z image frames is determined by the shooting order of the plurality of image frames. Specifically, let M be the number of the plurality of image frames: if M > Z, the Z image frames keep the relative order they had among the plurality of image frames; if M < Z, the order of the Z image frames is determined by the order of the plurality of image frames together with the positions of the proportionally inserted frames.
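One illustrative reading of the proportional matching described above (an assumption, not the patent's exact formula) is Z ≈ M · v_target / v_reference: the sequence shrinks when the target is slower than the reference and grows when it is faster.

```python
def target_frame_count(m, v_ref, v_obj):
    """Number of image frames Z the target object's sequence should contain
    so its apparent speed matches the reference object's.

    m: number of image frames M in the original video.
    v_ref, v_obj: motion speeds of the reference and target objects.
    Fewer frames speed a slow object up; extra frames slow a fast one down.
    """
    return max(1, round(m * v_obj / v_ref))
```

With M = 30 and a reference speed of 2.0, a target moving at 1.0 is cut to 15 frames (played twice as fast), while a target moving at 4.0 is expanded to 60 frames (played half as fast).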
Step S4, generating a target video according to the plurality of image frames and the image frames corresponding to each target object.
In this embodiment of the application, for the image frames corresponding to any target object, those image frames and the plurality of image frames may be placed in one-to-one correspondence according to their respective orders; each pair of corresponding frames is then fused to obtain a new image frame, and the new image frames are used as the latest image frames corresponding to the target object. Specifically, a sub-image of the target object can be obtained from its image frame by matting, and the image region at the corresponding position in the paired frame is replaced by that sub-image, thereby fusing each pair of corresponding frames into a new image.
In this embodiment of the application, the image frames corresponding to each target object may be fused into the plurality of image frames in turn. Specifically, the sub-images contained in the image frames corresponding to the first target object are fused into the plurality of image frames by matting and replacement, yielding the image frames after the first fusion; the sub-images contained in the image frames corresponding to the second target object are then fused into those frames in the same way, and so on, until the image frames corresponding to every target object have been fused in. Finally, the fully fused image frames are compressed with video compression software into a video file, which serves as the target video.
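The matting-and-replace fusion step can be sketched as a masked pixel copy. This is a simplified stand-in, assuming frames are 2-D pixel grids and the matting step has already produced a boolean mask of the target object:

```python
def fuse(base_frame, obj_frame, mask):
    """Composite the target object's pixels (where mask is True) from
    obj_frame onto base_frame, row by row; a simple stand-in for the
    matting-and-replace fusion described above."""
    return [
        [o if m else b for b, o, m in zip(brow, orow, mrow)]
        for brow, orow, mrow in zip(base_frame, obj_frame, mask)
    ]
```

Fusing the frames for several target objects in turn is then just repeated application of `fuse`, each time using the previously fused frame as the new base.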
In the embodiment of the application, the motion speed of each object in the original video is determined based on a plurality of image frames included in the original video; an object whose motion speed meets the target condition is determined as a reference object; the number of image frames contained in the original video is adjusted according to the motion speed of the reference object and the motion speed of the target object, to obtain image frames corresponding to the target object, where the target object is any object other than the reference object and the number of image frames corresponding to the target object matches the motion speed of the reference object and the motion speed of the target object; and a target video is generated according to the plurality of image frames and the image frames corresponding to each target object. In this way, the object whose motion speed meets the target condition serves as the reference, and the image frames corresponding to each target object are obtained based on the motion speed of the reference object and the motion speed of that target object, so that the number of image frames corresponding to each target object matches both speeds. Consequently, in the target video generated from the plurality of image frames and the image frames corresponding to each target object, the motion speeds of the objects are better coordinated; the problem of a poor video effect caused by motion disorder can be avoided to a certain extent, and the playback effect is improved.
Optionally, step S3 may include the steps of:
and S31, performing frame extraction processing on the plurality of image frames according to the motion speed of the reference object and the motion speed of the target object under the condition that the motion speed of the target object is smaller than the motion speed of the reference object, so as to obtain the image frames corresponding to the target object.
In this embodiment of the application, the difference between the motion speed of the reference object and that of the target object may be calculated; if the difference is positive, the motion speed of the target object is less than that of the reference object, i.e., the target object moves slower than the reference object.
In this embodiment of the application, when the motion speed of the target object is less than that of the reference object, the ratio of the motion speed of the reference object to that of the target object is calculated; this motion speed ratio is a value greater than 1. Frames are extracted from the plurality of image frames according to this ratio, and the extracted images, arranged in their original order, serve as the image frames corresponding to the target object. The ratio may be rounded down, with the integer part used as the frame-extraction interval.
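A minimal sketch of the frame-extraction step under the stated assumptions (target slower than reference, ratio rounded down to its integer part); the function name is illustrative:

```python
def extract_frames(frames, v_ref, v_obj):
    """Speed up a slow target object by decimation: keep one frame out of
    every r, where r is the integer part of v_ref / v_obj (target slower
    than reference, so the ratio exceeds 1)."""
    r = max(1, int(v_ref / v_obj))  # integer part of the motion speed ratio
    return frames[::r]              # one frame from every r frames, order kept
```

With a reference object three times as fast as the target, every third frame is kept, so the target's displayed motion is tripled in speed.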
Step S32, in a case where the motion speed of the target object is not less than that of the reference object, performing frame interpolation processing on the plurality of image frames according to the motion speed of the reference object and the motion speed of the target object, to obtain the image frames corresponding to the target object.
In this embodiment of the application, the difference between the motion speed of the reference object and that of the target object may be calculated; if the difference is zero or negative, the motion speed of the target object is not less than that of the reference object, i.e., the target object moves as fast as or faster than the reference object.
In this embodiment of the application, when the motion speed of the target object is not less than that of the reference object, the ratio of the motion speed of the target object to that of the reference object is calculated; this motion speed ratio is a value greater than or equal to 1. Frames may be inserted into the plurality of image frames according to this ratio; the inserted frames may be obtained by copying the image frame before or after the insertion position, and the resulting sequence serves as the image frames corresponding to the target object. The ratio may be rounded down, with the integer part used as the number of inserted frames.
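A minimal sketch of the frame-interpolation step, reading "copying the frame before the insertion position" as frame repetition. The text is ambiguous about whether the rounded ratio counts total copies or inserted frames, so this sketch assumes each frame appears r times in total:

```python
def interpolate_frames(frames, v_ref, v_obj):
    """Slow down a fast target object by frame duplication: each frame
    appears r times in total, where r is the integer part of
    v_obj / v_ref (target at least as fast as the reference)."""
    r = max(1, int(v_obj / v_ref))  # integer part of the motion speed ratio
    out = []
    for f in frames:
        out.extend([f] * r)         # inserted frames copy the frame before them
    return out
```

With a target object three times as fast as the reference, each frame is shown three times, so the target's displayed motion is slowed to a third of its speed; when the speeds are equal, the sequence is unchanged.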
In this embodiment of the application, when the motion speed of the target object is less than that of the reference object, frame extraction is performed on the plurality of image frames according to the two motion speeds to obtain the image frames corresponding to the target object; when the motion speed of the target object is not less than that of the reference object, frame interpolation is performed instead. In this way, target objects can be conveniently divided, relative to the reference object, into those that move slower and those that move faster or equally fast. Frame extraction speeds up the displayed motion of a target object that moves slower than the reference object, while frame interpolation slows down the displayed motion of a target object that moves faster. Since fast objects are slowed and slow objects are sped up relative to the reference, no single object moves too fast or too slow, and the motion speeds of the objects are better coordinated.
Optionally, step S31 may include the steps of:
step S311, determining a frame extraction ratio according to the motion speed of the reference object and the motion speed of the target object.
In this embodiment of the application, the ratio of the motion speed of the reference object to that of the target object may be determined, and this motion speed ratio is used as the frame-extraction ratio. Specifically, refer to the following formula (1):
r_i = v_0 / v_i        (1)

where r_i represents the frame-extraction ratio corresponding to the i-th object, v_0 represents the motion speed of the reference object, and v_i represents the motion speed of the i-th object. The calculation result of formula (1) retains only the integer part.
Step S312, extracting an image frame from the plurality of image frames according to the frame extraction ratio as a first target image frame.
In this embodiment, the plurality of image frames may be divided into a plurality of groups according to the frame extraction ratio, and the first image frame of each group is extracted as a first target image frame; that is, according to the frame extraction ratio r_i, one image frame is extracted from every r_i image frames of the plurality of image frames, and the extracted image frames are then used as the first target image frames.
And step S313, forming an image frame corresponding to the target object according to the first target image frame.
In this embodiment of the present application, the first target image frames may be arranged in the extraction order, and the arranged first target image frames may be used as image frames corresponding to the target object.
In the embodiment of the application, the frame extraction proportion is determined according to the motion speed of the reference object and the motion speed of the target object; extracting an image frame from the plurality of image frames according to the frame extraction proportion as a first target image frame; and forming an image frame corresponding to the target object according to the first target image frame. Therefore, the frame extraction processing can be conveniently carried out on the plurality of image frames according to the frame extraction proportion, the frame extraction processing efficiency is improved, and the image frames corresponding to the target object are conveniently obtained according to the extracted first target image frames, so that the acquisition efficiency of the image frames corresponding to the target object is improved.
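Steps S311 to S313 can be sketched as follows. This is an illustrative Python sketch, not the patented implementation; the function name `extract_frames`, its parameters, and the representation of the video as a list of frames are assumptions for demonstration only.

```python
def extract_frames(frames, v_ref, v_target):
    """Frame extraction for a target object slower than the reference object."""
    # Step S311: frame extraction ratio per formula (1), integer part retained.
    r = int(v_ref / v_target)
    # Steps S312-S313: divide the frames into groups of r and keep the first
    # frame of each group, arranged in extraction order.
    return [frames[i] for i in range(0, len(frames), r)]
```

For instance, with a reference object twice as fast as the target object, every second image frame is kept, which doubles the apparent motion speed of the target object when the result is played back at the original frame rate.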
Optionally, step S32 may include the steps of:
step S321, determining the frame inserting proportion according to the motion speed of the reference object and the motion speed of the target object.
In this embodiment of the present application, the motion speed ratio of the target object to the reference object may be determined according to the motion speed of the reference object and the motion speed of the target object, and the motion speed ratio is used as the frame insertion ratio.
And step S322, acquiring an image frame from the plurality of image frames according to the frame inserting proportion as a second target image frame.
In the embodiment of the present application, the plurality of image frames may be divided into a plurality of groups according to the frame insertion ratio, and the first image frame of each group is copied as a second target image frame; that is, according to the frame insertion ratio x_i, one image frame is copied from every x_i image frames of the plurality of image frames, and the copied image frames are then used as the second target image frames.
Step S323, inserting the second target image frame into the plurality of image frames, and forming an image frame corresponding to the target object according to the plurality of image frames after the second target image frame is inserted.
In this embodiment of the present application, the second target image frame may be sequentially inserted into the plurality of image frames according to the corresponding copy positions in the plurality of image frames, and the plurality of image frames after the frame insertion may be used as the image frames corresponding to the target object. Specifically, for any second target image frame, the second target image frame may be inserted in front of or behind the image frame corresponding to the copy position according to the copy position of the second target image frame in the plurality of image frames, so that the second target image frame is inserted in the plurality of image frames. This is by way of illustration only and the present embodiments are not limited thereto.
In the embodiment of the application, the frame inserting proportion is determined according to the motion speed of the reference object and the motion speed of the target object; acquiring an image frame from the plurality of image frames according to the frame insertion proportion as a second target image frame; and inserting the second target image frame into the plurality of image frames, and forming an image frame corresponding to the target object according to the plurality of image frames after the second target image frame is inserted. Therefore, the frame inserting process can be conveniently carried out on the plurality of image frames according to the frame inserting proportion, the frame inserting process efficiency is improved, the image frames corresponding to the target object are conveniently obtained according to the obtained second target image frames, and therefore the obtaining efficiency of the image frames corresponding to the target object is improved.
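Steps S321 to S323 can be sketched in the same illustrative style. The function name `interpolate_frames` and the copy-behind-source insertion position are assumptions; the embodiment states only that a copy may be inserted in front of or behind the image frame at its copy position.

```python
def interpolate_frames(frames, v_ref, v_target):
    """Frame interpolation for a target object not slower than the reference object."""
    # Step S321: frame insertion ratio (target speed over reference speed),
    # integer part retained, mirroring formula (1).
    x = int(v_target / v_ref)
    out = []
    for i, frame in enumerate(frames):
        out.append(frame)
        # Steps S322-S323: copy the first frame of each group of x frames and
        # insert the copy immediately behind its source position.
        if i % x == 0:
            out.append(frame)
    return out
```

Duplicating frames lengthens the sequence, so the target object takes more frames to cover the same displacement, which slows its displayed motion speed.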
Optionally, step S1 may include the steps of:
step S11, obtaining image coordinate information of each object in a plurality of image frames included in the original video.
In this embodiment of the present application, each image frame may be divided into regions according to the positions of the objects in the plurality of image frames included in the original video, where each region includes only one object, and the object in each region is marked with an identifier. For example, if the plurality of image frames include the sun, a tree, a pedestrian, and the like, object A, object B, and object C may be used as the identifiers corresponding to the sun, the tree, and the pedestrian, respectively, to mark the sun, the tree, and the pedestrian.
In this embodiment of the present application, a coordinate system may be established for each image frame in the plurality of image frames. Specifically, the lower left corner of the image frame may be used as the origin of coordinates, and the two sides of the image frame adjacent to the origin of coordinates may be used as the coordinate axes to construct the coordinate system. For each object on any image frame, the coordinates of the center position of the object in the coordinate system corresponding to the image frame can be obtained according to the position of the object in the image frame and used as the image coordinates of the object on the image frame. Correspondingly, for any object, the image coordinates of the object in the coordinate system corresponding to each of the plurality of image frames are respectively obtained, and the image coordinate information corresponding to the object is formed, thereby obtaining the image coordinate information of each object in the plurality of image frames included in the original video. For example, for the object A, the coordinates (Ax_i, Ay_i) of the object A on the i-th image frame may be obtained, and an image coordinate information matrix corresponding to the object A may be generated according to the image coordinates (Ax_i, Ay_i), as shown in the following matrix (2):

A = [ Ax_1  Ay_1
      Ax_2  Ay_2
      ⋮     ⋮
      Ax_n  Ay_n ]  (2)

wherein the matrix A represents the image coordinate information matrix corresponding to the object A, Ax_1 represents the x-coordinate of the object A on the 1st image frame, Ay_1 represents the y-coordinate of the object A on the 1st image frame, and so on; Ax_n represents the x-coordinate of the object A on the n-th image frame, and Ay_n represents the y-coordinate of the object A on the n-th image frame. This is by way of example only, and the embodiments of the present application are not limited thereto.
And step S12, calculating the speed of each object corresponding to each image frame in the plurality of image frames according to the image coordinate information of each object, the frame rate of the original video and the plurality of image frames.
In the embodiment of the present application, frame numbers of the plurality of image frames, for example f_1, f_2, f_3, and so on, may be obtained by numbering the plurality of image frames according to their shooting order. The speed of each object corresponding to each image frame may then be calculated from the frame numbers of the plurality of image frames, the image coordinate information of each object, and the frame rate of the original video. Specifically, for any one of the objects, reference may be made to the following formula (3):

v = fps · √((x_2 − x_1)² + (y_2 − y_1)²) / (f_2 − f_1)  (3)

where fps (frames per second) represents the frame rate of the original video, x_1 represents the x-coordinate of the center point of the object in the previous image frame, x_2 represents the x-coordinate of the center point of the object in the subsequent image frame, y_1 represents the y-coordinate of the center point of the object in the previous image frame, y_2 represents the y-coordinate of the center point of the object in the subsequent image frame, f_1 represents the frame number of the previous image frame, and f_2 represents the frame number of the subsequent image frame. For example, for the object A, the speed v_1 of the object A at the 1st frame and the speed v_2 of the object A at the 2nd frame can be obtained by calculation with formula (3), and v_3, v_4, …, v_n can be obtained in sequence.
And step S13, determining the movement speed of each object according to the speed of each object corresponding to each image frame.
In the embodiment of the present application, the moving speed average value of each object may be calculated according to the speed of each object corresponding to each image frame, and the moving speed average value of each object may be used as the moving speed of each object. Specifically, for any one of the objects, reference may be made to the following formula (4):
V = (1/N) · Σ_{i=1}^{N} v_i  (4)

wherein v_i represents the speed of the object corresponding to the i-th image frame, and N represents the number of image frames in the plurality of image frames. For example, the average speed V_A corresponding to the object A can be calculated with reference to formula (4), and V_A is used as the motion speed of the object A.
In the embodiment of the application, the image coordinate information of each object in a plurality of image frames included in the original video is obtained; calculating the speed of each object corresponding to each image frame in the plurality of image frames according to the image coordinate information of each object, the frame rate of the original video and the plurality of image frames; and determining the movement speed of each object according to the speed of each object corresponding to each image frame. In this way, the speed of each object corresponding to each image frame can be conveniently calculated according to the image coordinate information, the frame rate of the original video and the plurality of image frames by acquiring the image coordinate information of each object, so that the motion speed of each object in the plurality of image frames can be conveniently determined according to the speed of each object corresponding to each image frame.
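Steps S12 and S13 can be sketched as follows. The helper names and the list-of-coordinate-pairs representation are assumptions made for illustration; the computation follows formulas (3) and (4) as described above.

```python
import math

def per_frame_speeds(coords, frame_nums, fps):
    # Formula (3): displacement of the object's center point between
    # consecutive image frames, divided by the elapsed time (f2 - f1) / fps.
    return [
        math.hypot(x2 - x1, y2 - y1) * fps / (f2 - f1)
        for (x1, y1), (x2, y2), f1, f2
        in zip(coords, coords[1:], frame_nums, frame_nums[1:])
    ]

def motion_speed(coords, frame_nums, fps):
    # Formula (4): the average of the per-frame speeds is taken as the
    # object's motion speed.
    speeds = per_frame_speeds(coords, frame_nums, fps)
    return sum(speeds) / len(speeds)
```

For an object moving 5 coordinate units per frame in a 30 fps video, this yields a motion speed of 150 units per second.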
Optionally, step S2 may include the steps of:
and S21, determining a reference speed according to the movement speed of each object and the preset weight.
In this embodiment of the present application, the preset weight corresponding to any one object may be determined according to the motion speed of the object relative to other objects, for example, the sun moves very slowly relative to the pedestrian, if the preset weight corresponding to the pedestrian is 1, the preset weight corresponding to the sun may be set to 10, so that the influence of the motion speed of the object that moves very fast or very slow on the reference speed value may be reduced.
In this embodiment of the present application, the motion speed corresponding to any object may be multiplied by a preset weight, and then the products of the motion speeds corresponding to the objects and the preset weight are summed and averaged to obtain a weighted average of the motion speeds of the objects, and the weighted average is used as the reference speed. Specifically, the following equations (5) and (6) are referred to:
S = Σ_{i=1}^{N} a_i · v_i  (5)

V_t = S / N  (6)

wherein V_t represents the reference speed, v_i represents the motion speed of the i-th object, a_i represents the preset weight corresponding to the i-th object, S represents the weighted sum of the motion speeds, and N represents the total number of objects.
And step S22, determining an object with the smallest absolute value of the difference value between the motion speed and the reference speed in the objects as a reference object.
In the embodiment of the application, difference calculation is performed on the motion speed of each object and the reference speed to obtain the difference between the motion speed of each object and the reference speed, and according to the differences, the object corresponding to the difference with the smallest absolute value is selected as the reference object. For example, if |V_A − V_t| is the smallest among the objects, the object A is selected as the reference object.
In the embodiment of the application, the reference speed is determined according to the movement speed of each object and preset weight; and determining the object with the smallest absolute value of the difference value between the motion speed and the reference speed in the objects as a reference object. Therefore, the influence of the movement speed of each object on the reference speed can be adjusted through the preset weight, so that the reference speed is more coordinated with the movement speed of each object. Further, according to the absolute value of the difference between the reference speed and the motion speed of each object, the reference object is selected from the objects, and the reference speed is coordinated with the motion speed of each object after being adjusted by the preset weight, so that the object with the smallest absolute value of the difference between the motion speed and the reference speed in each object is determined as the reference object, and the reference object has more reference value.
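Steps S21 and S22 can be sketched together. The function name and the index-based return value are assumptions; the weighted "summed and averaged" computation follows formulas (5) and (6).

```python
def reference_object(speeds, weights):
    """Return the index of the reference object per steps S21-S22."""
    # Step S21: weighted average of the motion speeds as the reference speed
    # (formulas (5) and (6)).
    v_t = sum(a * v for a, v in zip(weights, speeds)) / len(speeds)
    # Step S22: the object whose motion speed has the smallest absolute
    # difference from the reference speed is the reference object.
    return min(range(len(speeds)), key=lambda i: abs(speeds[i] - v_t))
```

With equal weights, the object whose speed lies closest to the plain average is chosen; increasing an object's preset weight shifts the reference speed toward (or away from) it as described above.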
Optionally, step S4 may include the steps of:
step S41, for any one of the target objects, acquiring a sub-image and an image coordinate of the target object according to an image frame corresponding to the target object.
In this embodiment, for any target object, according to an image frame corresponding to the target object, the image portion including the target object in each image frame is respectively scratched, the image portion obtained by the scratched image is used as a sub-image of the target object, and coordinates of a center position of the image portion of the scratched image are used as image coordinates of the target object, so as to obtain the sub-image and the image coordinates of the target object corresponding to each image frame in the image frame corresponding to the target object.
Step S42, determining target coordinates matched with the image coordinates in the plurality of image frames according to the image coordinates of the target object, and updating sub-images corresponding to the target coordinates in the plurality of image frames into sub-images of the target object to obtain the latest image frame corresponding to the target object; wherein the latest image frame includes a plurality of updated image frames.
In this embodiment of the present application, for an image frame corresponding to any target object, the image frame corresponding to the target object may be in one-to-one correspondence with a plurality of image frames according to respective arrangement sequences. For any one of the image frames corresponding to the target object, the coordinates, which are consistent with the image coordinates of the target object, in the image frames consistent with the image frames in sequence in the plurality of image frames can be determined according to the image coordinates of the target object in the image frames, and the coordinates consistent with the image coordinates are taken as the target coordinates corresponding to the image frames. And determining the target coordinates matched with the image coordinates in the plurality of image frames according to the image coordinates of the target object corresponding to each image frame in the same way.
In this embodiment of the present application, for an image frame corresponding to any target object, the image frame corresponding to the target object may be in one-to-one correspondence with a plurality of image frames according to respective arrangement sequences. For any one of the plurality of image frames, positioning can be performed according to the target coordinates corresponding to the image frame, and the sub-image corresponding to the target coordinates is determined as the sub-image to be updated. Then, determining the sub-image corresponding to the target coordinate at the corresponding image coordinate of the target object as the sub-image for updating, and replacing the sub-image to be updated with the sub-image for updating, so that the sub-image corresponding to the target coordinate in the image frame is updated as the sub-image corresponding to the target object. And finally, respectively updating the sub-images corresponding to the target coordinates in the plurality of image frames into the sub-images corresponding to the target object in the same way, and taking the updated plurality of image frames as the latest image frames corresponding to the target object.
And step S43, generating the target video according to the latest image frames corresponding to the target objects.
In this embodiment of the present application, after the latest image frame corresponding to the first target object is obtained, the latest image frame of the first target object may be used as the reference image frame of the second target object; the target coordinates matched with the image coordinates in the reference image frame are determined according to the image coordinates of the second target object, and the sub-image corresponding to the target coordinates in the reference image frame is updated to the sub-image corresponding to the second target object, so as to obtain the latest image frame corresponding to the second target object. By analogy, the latest image frame of the last of the target objects is finally obtained, and the images in that latest image frame are compressed by video compression software to generate a video file as the target video. For example, the latest image frame of the pedestrian is used as the reference image frame, the target coordinates matched with the image coordinates in the reference image frame are determined according to the image coordinates of the sun, and the sub-image corresponding to the target coordinates in the reference image frame is updated to the sub-image corresponding to the sun, so as to obtain the latest image frame corresponding to the sun.
In the embodiment of the application, for any target object, a sub-image and image coordinates of the target object are obtained according to an image frame corresponding to the target object; determining target coordinates matched with the image coordinates in the plurality of image frames according to the image coordinates of the target object, and updating sub-images corresponding to the target coordinates in the plurality of image frames into the sub-images of the target object to obtain the latest image frame corresponding to the target object; wherein the latest image frame comprises a plurality of updated image frames; and generating the target video according to the latest image frames corresponding to the target objects. In this way, the sub-images corresponding to the target coordinates in the plurality of image frames can be conveniently updated through the sub-images and the image coordinates corresponding to the target objects, so that the sub-images of the target objects in the plurality of image frames are matched with the image frames corresponding to the target objects. In addition, as the latest image frames comprise a plurality of updated image frames, the target video is generated according to the latest image frames corresponding to all target objects, so that the sub-images of all target objects in the target video are matched with the image frames corresponding to all target objects, the movement speeds of all objects in the target video are more coordinated, the problem of poor video effect caused by movement disorder can be avoided to a certain extent, and the playing effect of the video is improved.
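The sub-image update of steps S41 to S43 can be sketched as follows. Frames are represented here as plain nested lists of pixel values, and `fuse_frames`, its parameters, and the fixed center-anchored patch window are all illustrative assumptions rather than the patented matting procedure.

```python
def fuse_frames(base_frames, target_frames, coords, patch):
    # Steps S41-S43 (illustrative): for each base frame, replace the
    # sub-image at the matched target coordinates with the target object's
    # sub-image cut from the corresponding frame of the target object's
    # image frame sequence. `patch` is (height, width); `coords` holds the
    # (x, y) image coordinates of the object's center per frame.
    ph, pw = patch
    fused = []
    for base, tgt, (cx, cy) in zip(base_frames, target_frames, coords):
        out = [row[:] for row in base]  # copy so the base frame is untouched
        y0, x0 = cy - ph // 2, cx - pw // 2
        for dy in range(ph):
            for dx in range(pw):
                out[y0 + dy][x0 + dx] = tgt[y0 + dy][x0 + dx]
        fused.append(out)
    return fused
```

Repeating this with each target object's latest image frames as the new base yields the chained fusion described above (pedestrian first, then sun, and so on).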
Optionally, before step S43, the method further includes:
step S5, clipping the latest image frames corresponding to the target objects according to the target quantity to obtain clipped latest image frames corresponding to the target objects; the number of the image frames contained in the latest image frames after clipping is matched with the target number, the target number is the number of the image frames contained in the image frames corresponding to the reference object, and the reference object is the target object with the minimum number of the image frames contained in the image frames corresponding to the target objects.
In the embodiment of the present application, since the latest image frames corresponding to each target object have been subjected to frame extraction processing or frame interpolation processing, the numbers of image frames included in the respective latest image frames differ: the frame extraction processing reduces the number of image frames contained in the corresponding latest image frames relative to the plurality of image frames, while the frame interpolation processing increases it.
In this embodiment of the present application, the target object having the smallest number of image frames included in the latest image frames corresponding to each target object may be used as the reference object, and the number of image frames included in the latest image frames corresponding to the reference object may be used as the target number. And then adjusting the number of the image frames contained in the latest image frames corresponding to each target object according to the target number, specifically, reserving the number of the image frames which are consistent with the target number from the first image frame in each latest image frame, and deleting other image frames, namely, cutting the latest image frames corresponding to each target object, so that the number of the image frames contained in the cut latest image frames is consistent with the target number, and the number of the image frames contained in the cut latest image frames is matched with the target number. The method comprises the steps of cutting each latest image frame according to the number of targets, reserving a part which is common to the latest image frames corresponding to each target object, and taking the reserved common part of each latest image frame as the cut latest image frame corresponding to each target object.
In the embodiment of the application, the latest image frames corresponding to the target objects are clipped according to the target number, so that the clipped latest image frames corresponding to the target objects are obtained; the number of image frames contained in the clipped latest image frames matches the target number, the target number is the number of image frames contained in the image frames corresponding to the reference object, and the reference object is the target object with the minimum number of image frames contained in the image frames corresponding to the target objects. Since the target number is the number of image frames contained in the image frames corresponding to this reference object, the image frames of the common part corresponding to the target number can be preserved in the latest image frames corresponding to each target object, and the image frames exceeding the target number can be clipped, so that the number of image frames contained in the clipped latest image frames corresponding to each target object matches the target number.
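Step S5 can be sketched as follows. The dictionary-of-sequences representation and the function name are assumptions for illustration; the logic keeps, for every target object, the frames from the first frame up to the target number as described above.

```python
def clip_to_common_length(latest_frames):
    # Step S5: the target number is the frame count of the target object
    # whose latest image frames contain the fewest frames; every sequence
    # is clipped from its first frame to that target number, so only the
    # common part of all sequences is retained.
    target_n = min(len(seq) for seq in latest_frames.values())
    return {obj: seq[:target_n] for obj, seq in latest_frames.items()}
```

After clipping, all sequences have equal length, so they can be fused frame by frame and compressed into the target video without misalignment.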
fig. 2 is a flowchart illustrating another video processing method according to an embodiment of the present application. As shown in fig. 2, the video processing method includes steps 601 to 604, which are respectively data marking, frame motion speed calculation, motion speed tuning, and content fusion. In the data marking step, each image frame may be divided into regions according to the positions of the objects in the image frames included in the original video, where each region includes only one object, and the object in each region is marked with an identifier; for example, if the image frames include the sun, a tree, and a pedestrian, object A, object B, and object C may be used to represent the sun, the tree, and the pedestrian, respectively. In the frame motion speed calculation step, the speed of the target object corresponding to each image frame may be calculated with formula (3) according to the frame numbers of the plurality of image frames, the image coordinate information of the target object, and the frame rate of the original video; then the average speed of the target object is calculated with formula (4) according to the speed of the target object corresponding to each image frame and used as the motion speed of the target object; finally, the reference speed is determined with formula (5) according to the motion speed of each object and the preset weights.
The motion speed adjustment and optimization can determine a frame extraction proportion or a frame insertion proportion according to the motion speed of the reference object and the motion speed of the target object, and adjust the number of image frames contained in the plurality of image frames according to the frame extraction proportion or the frame insertion proportion to obtain the image frames corresponding to the target object. The content fusion can sequentially and one-to-one correspond to the image frames corresponding to the target objects with the image frames, and then respectively fuse the image frames corresponding to the target objects with the image frames to obtain new images, so that the latest image frames corresponding to the target objects are obtained. Then, the image frames corresponding to the target objects are cut according to the target quantity, the common parts of the image frames corresponding to the target objects are reserved, and the video compression software is utilized to compress the common parts of the image frames corresponding to the target objects to generate a video file serving as a target video. Under the condition that the target video comprises sun and pedestrians, the scene of running of people on the ground and alternate days and months in the sky is displayed when the target video is played, fast and slow moving objects are organically fused, and the playing effect of the video is improved.
An embodiment of the present application provides a video processing apparatus, as shown in fig. 3, the apparatus 70 includes:
A first determining module 701, configured to determine a motion speed of each object in an original video based on a plurality of image frames included in the original video;
a second determining module 702, configured to determine an object whose movement speed meets a target condition among the objects as a reference object;
an adjusting module 703, configured to adjust the number of image frames included in the original video according to the motion speed of the reference object and the motion speed of the target object, so as to obtain an image frame corresponding to the target object; the number of the image frames corresponding to the target object is matched with the movement speed of the reference object and the movement speed of the target object;
and the generating module 704 is configured to generate a target video according to the plurality of image frames and the image frames corresponding to the target objects.
Optionally, the adjusting module 703 is specifically configured to:
under the condition that the motion speed of the target object is smaller than that of the reference object, performing frame extraction processing on the plurality of image frames according to the motion speed of the reference object and the motion speed of the target object to obtain image frames corresponding to the target object;
and under the condition that the motion speed of the target object is not less than that of the reference object, performing frame interpolation processing on the plurality of image frames according to the motion speed of the reference object and the motion speed of the target object to obtain the image frames corresponding to the target object.
Optionally, the adjusting module 703 is specifically further configured to:
determining a frame extraction proportion according to the motion speed of the reference object and the motion speed of the target object;
extracting an image frame from the plurality of image frames according to the frame extraction proportion as a first target image frame;
and forming an image frame corresponding to the target object according to the first target image frame.
Optionally, the adjusting module 703 is specifically further configured to:
determining a frame inserting proportion according to the motion speed of the reference object and the motion speed of the target object;
acquiring an image frame from the plurality of image frames according to the frame insertion proportion as a second target image frame;
and inserting the second target image frame into the plurality of image frames, and forming an image frame corresponding to the target object according to the plurality of image frames after the second target image frame is inserted.
Optionally, the first determining module 701 is specifically configured to:
acquiring image coordinate information of each object in a plurality of image frames included in the original video;
calculating the speed of each object corresponding to each image frame in the plurality of image frames according to the image coordinate information of each object, the frame rate of the original video and the plurality of image frames;
And determining the movement speed of each object according to the speed of each object corresponding to each image frame.

Optionally, the second determining module 702 is specifically configured to:
determining a reference speed according to the motion speed of each object and preset weights;
and determining the object with the smallest absolute value of the difference value between the motion speed and the reference speed in the objects as a reference object.
Optionally, the generating module 704 is specifically configured to:
for any target object, acquiring a sub-image and image coordinates of the target object according to an image frame corresponding to the target object;
determining target coordinates matched with the image coordinates in the plurality of image frames according to the image coordinates of the target object, and updating sub-images corresponding to the target coordinates in the plurality of image frames into the sub-images of the target object to obtain the latest image frame corresponding to the target object; wherein the latest image frame comprises a plurality of updated image frames;
and generating the target video according to the latest image frames corresponding to the target objects.
Optionally, the apparatus 70 further includes:
the clipping module is used for clipping the latest image frames corresponding to the target objects according to the target quantity before the generating module generates the target video according to the latest image frames corresponding to the target objects, so as to obtain the clipped latest image frames corresponding to the target objects; the number of the image frames contained in the latest image frames after clipping is matched with the target number, the target number is the number of the image frames contained in the image frames corresponding to the reference object, and the reference object is the target object with the minimum number of the image frames contained in the image frames corresponding to the target objects.
The video processing device has the same advantages as the video processing method described above over the prior art, and will not be described here again.
The video processing device in the embodiment of the present application may be a device, or may be a component, an integrated circuit, or a chip in a terminal. The device may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a cell phone, tablet computer, notebook computer, palm computer, vehicle-mounted electronic device, wearable device, ultra-mobile personal computer (ultra-mobile personal computer, UMPC), netbook or personal digital assistant (personal digital assistant, PDA), etc., and the non-mobile electronic device may be a server, network attached storage (Network Attached Storage, NAS), personal computer (personal computer, PC), television (TV), teller machine or self-service machine, etc., and the embodiments of the present application are not limited in particular.
The video processing device in the embodiment of the present application may be a device having an operating system. The operating system may be an Android operating system, an iOS operating system, or other possible operating systems, which are not specifically limited in the embodiments of the present application.
The video processing device provided in this embodiment of the present application can implement each process implemented by the method embodiment of fig. 1, and in order to avoid repetition, a description is omitted here.
Optionally, as shown in fig. 4, the embodiment of the present application further provides an electronic device 80, including a processor 801 and a memory 802, where the memory 802 stores a program or instructions executable on the processor 801; when executed by the processor 801, the program or instructions implement each step of the video processing method embodiment and can achieve the same technical effect, which is not repeated here to avoid repetition.
The electronic device in the embodiment of the application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 5 is a schematic hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 90 includes, but is not limited to: radio frequency unit 901, network module 902, audio output unit 903, input unit 904, sensor 905, display unit 906, user input unit 907, interface unit 908, memory 909, and processor 910.
Those skilled in the art will appreciate that the electronic device 90 may also include a power source (e.g., a battery) for powering the various components, which may be logically connected to the processor 910 by a power management system to perform functions such as managing charge, discharge, and power consumption by the power management system. The electronic device structure shown in fig. 5 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than shown, or may combine certain components, or may be arranged in different components, which are not described in detail herein.
It should be appreciated that in embodiments of the present application, the input unit 904 may include a graphics processor (Graphics Processing Unit, GPU) 9041 and a microphone 9042, with the graphics processor 9041 processing image data of still pictures or video obtained by an image capture device (e.g., a camera) in a video capture mode or an image capture mode. The display unit 906 may include a display panel 9061, and the display panel 9061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 907 includes at least one of a touch panel 9071 and other input devices 9072. Touch panel 9071, also referred to as a touch screen. The touch panel 9071 may include two parts, a touch detection device and a touch controller. Other input devices 9072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and so forth, which are not described in detail herein.
The memory 909 may be used to store software programs as well as various data. The memory 909 may mainly include a first storage area storing programs or instructions and a second storage area storing data, wherein the first storage area may store an operating system, and application programs or instructions (such as a sound playing function, an image playing function, etc.) required for at least one function, and the like. Further, the memory 909 may include volatile memory or nonvolatile memory, or the memory 909 may include both volatile and nonvolatile memory. The nonvolatile memory may be a read-only memory (Read-Only Memory, ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be random access memory (Random Access Memory, RAM), static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), or direct Rambus RAM (DRRAM). The memory 909 in embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
Processor 910 may include one or more processing units; optionally, the processor 910 integrates an application processor that primarily processes operations involving an operating system, user interface, application programs, etc., and a modem processor that primarily processes wireless communication signals, such as a baseband processor. It will be appreciated that the modem processor described above may not be integrated into the processor 910.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored, and when the program or the instruction is executed by a processor, the program or the instruction realizes each process of the embodiment of the video processing method, and the same technical effect can be achieved, so that repetition is avoided, and no redundant description is provided herein.
Wherein the processor is a processor in the electronic device described in the above embodiment. The readable storage medium includes computer readable storage medium such as computer readable memory ROM, random access memory RAM, magnetic or optical disk, etc.
The embodiment of the application further provides a chip, the chip includes a processor and a communication interface, the communication interface is coupled with the processor, and the processor is used for running a program or an instruction, so as to implement each process of the embodiment of the video processing method, and achieve the same technical effect, so that repetition is avoided, and no redundant description is provided here.
It should be understood that the chip referred to in the embodiments of the present application may also be referred to as a system-level chip, a chip system, or a system-on-chip, etc.
The embodiments of the present application provide a computer program product stored in a storage medium, where the program product is executed by at least one processor to implement the respective processes of the embodiments of the video processing method, and achieve the same technical effects, and are not repeated herein.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed; the functions may also be performed in a substantially simultaneous manner or in the reverse order, depending on the functions involved. For example, the described methods may be performed in an order different from that described, and various steps may also be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solutions of the present application may be embodied essentially or in a part contributing to the prior art in the form of a computer software product stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk), comprising several instructions for causing a terminal (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the methods described in the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive, and many forms may be made by those of ordinary skill in the art without departing from the spirit of the present application and the scope of the claims, which are also within the protection of the present application.

Claims (10)

1. A method of video processing, the method comprising:
determining the motion speed of each object in an original video based on a plurality of image frames included in the original video;
determining an object with the motion speed meeting the target condition in the objects as a reference object;
adjusting the number of image frames contained in the original video according to the motion speed of the reference object and the motion speed of a target object, to obtain image frames corresponding to the target object; wherein the target object is any object other than the reference object among the objects, and the number of image frames corresponding to the target object matches the motion speed of the reference object and the motion speed of the target object;
and generating a target video according to the plurality of image frames and the image frames corresponding to the target objects.
2. The method according to claim 1, wherein the adjusting the number of image frames included in the original video according to the motion speed of the reference object and the motion speed of the target object to obtain the image frame corresponding to the target object includes:
under the condition that the motion speed of the target object is smaller than that of the reference object, performing frame extraction processing on the plurality of image frames according to the motion speed of the reference object and the motion speed of the target object to obtain image frames corresponding to the target object;
and under the condition that the motion speed of the target object is not less than the motion speed of the reference object, performing frame interpolation processing on the plurality of image frames according to the motion speed of the reference object and the motion speed of the target object to obtain the image frames corresponding to the target object.
3. The method according to claim 2, wherein the performing frame extraction processing on the plurality of image frames according to the motion speed of the reference object and the motion speed of the target object to obtain the image frame corresponding to the target object includes:
determining a frame extraction proportion according to the motion speed of the reference object and the motion speed of the target object;
extracting an image frame from the plurality of image frames according to the frame extraction proportion as a first target image frame;
and forming an image frame corresponding to the target object according to the first target image frame.
4. The method according to claim 2, wherein the performing the frame interpolation process on the plurality of image frames according to the motion speed of the reference object and the motion speed of the target object to obtain the image frame corresponding to the target object includes:
determining a frame inserting proportion according to the motion speed of the reference object and the motion speed of the target object;
acquiring an image frame from the plurality of image frames according to the frame insertion proportion as a second target image frame;
and inserting the second target image frame into the plurality of image frames, and forming an image frame corresponding to the target object according to the plurality of image frames after the second target image frame is inserted.
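The frame extraction of claim 3 and the frame interpolation of claim 4 can be sketched together. The concrete proportion `speed_target / speed_reference`, the even sampling used for extraction, and the use of repeated frames in place of synthesized intermediate frames are all assumptions, since the claims specify only that a proportion is determined from the two motion speeds:

```python
def resample_frames(frames, speed_target, speed_reference):
    """Frame extraction when the target object is slower than the
    reference object (claim 3), frame interpolation otherwise (claim 4).
    The ratio speed_target / speed_reference is an assumed concrete
    choice of the extraction / insertion proportion.
    """
    ratio = speed_target / speed_reference
    n = len(frames)
    if ratio < 1:
        # Extraction: keep about n * ratio frames, evenly sampled,
        # so the slower target appears to move at the reference speed.
        keep = max(1, round(n * ratio))
        idx = [round(i * (n - 1) / max(1, keep - 1)) for i in range(keep)]
        return [frames[i] for i in idx]
    # Interpolation: expand to about n * ratio frames; here frames are
    # simply repeated (a real system would synthesize in-between frames).
    total = round(n * ratio)
    return [frames[min(n - 1, int(i / ratio))] for i in range(total)]
```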
5. The method of claim 1, wherein determining the motion speed of each object in the original video based on a plurality of image frames included in the original video comprises:
acquiring image coordinate information of each object in a plurality of image frames included in the original video;
calculating the speed of each object corresponding to each image frame in the plurality of image frames according to the image coordinate information of each object, the frame rate of the original video and the plurality of image frames;
and determining the movement speed of each object according to the speed of each object corresponding to each image frame.
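The per-frame speed computation of claim 5 can be sketched as follows. Treating speed as the Euclidean displacement of the image coordinates between consecutive frames multiplied by the frame rate, and the motion speed as the average of the per-frame speeds, are assumed concrete readings of the claim:

```python
import math

def per_frame_speeds(coords, frame_rate):
    """Per-frame speed of one object from its image coordinates.

    coords: list of (x, y) image coordinates, one per image frame
    frame_rate: frames per second of the original video
    Returns the per-frame speeds (pixels per second) and their average
    as the object's motion speed.
    """
    speeds = []
    for (x0, y0), (x1, y1) in zip(coords, coords[1:]):
        displacement = math.hypot(x1 - x0, y1 - y0)  # pixels per frame
        speeds.append(displacement * frame_rate)     # pixels per second
    motion_speed = sum(speeds) / len(speeds)
    return speeds, motion_speed
```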
6. A video processing apparatus, the apparatus comprising:
the first determining module is used for determining the motion speed of each object in the original video based on a plurality of image frames included in the original video;
the second determining module is used for determining an object with the motion speed meeting the target condition in the objects as a reference object;
The adjusting module is used for adjusting the number of the image frames contained in the original video according to the movement speed of the reference object and the movement speed of the target object to obtain the image frames corresponding to the target object; the number of the image frames corresponding to the target object is matched with the movement speed of the reference object and the movement speed of the target object;
and the generating module is used for generating a target video according to the plurality of image frames and the image frames corresponding to the target objects.
7. The apparatus of claim 6, wherein the adjustment module is specifically configured to:
under the condition that the motion speed of the target object is smaller than that of the reference object, performing frame extraction processing on the plurality of image frames according to the motion speed of the reference object and the motion speed of the target object to obtain image frames corresponding to the target object;
and under the condition that the motion speed of the target object is not less than the motion speed of the reference object, performing frame interpolation processing on the plurality of image frames according to the motion speed of the reference object and the motion speed of the target object to obtain the image frames corresponding to the target object.
8. The apparatus of claim 7, wherein the adjustment module is further specifically configured to:
determining a frame extraction proportion according to the motion speed of the reference object and the motion speed of the target object;
extracting an image frame from the plurality of image frames according to the frame extraction proportion as a first target image frame;
and forming an image frame corresponding to the target object according to the first target image frame.
9. The apparatus of claim 7, wherein the adjustment module is further specifically configured to:
determining a frame inserting proportion according to the motion speed of the reference object and the motion speed of the target object;
acquiring an image frame from the plurality of image frames according to the frame insertion proportion as a second target image frame;
and inserting the second target image frame into the plurality of image frames, and forming an image frame corresponding to the target object according to the plurality of image frames after the second target image frame is inserted.
10. The apparatus of claim 6, wherein the first determining module is specifically configured to:
acquiring image coordinate information of each object in a plurality of image frames included in the original video;
calculating the speed of each object corresponding to each image frame in the plurality of image frames according to the image coordinate information of each object, the frame rate of the original video and the plurality of image frames;
and determining the movement speed of each object according to the speed of each object corresponding to each image frame.
CN202310149440.2A 2023-02-21 2023-02-21 Video processing method and device Pending CN116156250A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310149440.2A CN116156250A (en) 2023-02-21 2023-02-21 Video processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310149440.2A CN116156250A (en) 2023-02-21 2023-02-21 Video processing method and device

Publications (1)

Publication Number Publication Date
CN116156250A true CN116156250A (en) 2023-05-23

Family

ID=86355987

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310149440.2A Pending CN116156250A (en) 2023-02-21 2023-02-21 Video processing method and device

Country Status (1)

Country Link
CN (1) CN116156250A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0974540A (en) * 1995-09-06 1997-03-18 Nippon Telegr & Teleph Corp <Ntt> Moving image encoding information generation device for real-time fast forward reproduction
JP2000125183A (en) * 1998-10-20 2000-04-28 Casio Comput Co Ltd Image pickup unit and method for photographing consecutive image
CN108900771A (en) * 2018-07-19 2018-11-27 北京微播视界科技有限公司 A kind of method for processing video frequency, device, terminal device and storage medium
CN110198412A (en) * 2019-05-31 2019-09-03 维沃移动通信有限公司 A kind of video recording method and electronic equipment
CN111327908A (en) * 2020-03-05 2020-06-23 Oppo广东移动通信有限公司 Video processing method and related device
CN113067994A (en) * 2021-03-31 2021-07-02 联想(北京)有限公司 Video recording method and electronic equipment
CN113837136A (en) * 2021-09-29 2021-12-24 深圳市慧鲤科技有限公司 Video frame insertion method and device, electronic equipment and storage medium
US20230013753A1 (en) * 2020-03-27 2023-01-19 Vivo Mobile Communication Co., Ltd. Image shooting method and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination