CN111343500A - Video processing method and device and computer readable storage medium - Google Patents

Video processing method and device and computer readable storage medium

Info

Publication number
CN111343500A
Authority
CN
China
Prior art keywords
video
frame
frame image
source
images
Prior art date
Legal status
Pending
Application number
CN202010126854.XA
Other languages
Chinese (zh)
Inventor
杨全海
Current Assignee
Beijing Kingsoft Internet Security Software Co Ltd
Original Assignee
Beijing Kingsoft Internet Security Software Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Kingsoft Internet Security Software Co Ltd filed Critical Beijing Kingsoft Internet Security Software Co Ltd
Priority to CN202010126854.XA priority Critical patent/CN111343500A/en
Publication of CN111343500A publication Critical patent/CN111343500A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44008Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44016Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving splicing one content stream with another content stream, e.g. for substituting a video clip
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/4402Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/440281Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display by altering the temporal resolution, e.g. by frame skipping

Abstract

The embodiment of the invention discloses a video processing method, a video processing device and a computer-readable storage medium, which are used for solving the problem of frame skipping from the last frame to the first frame during video carousel. The method comprises the following steps: acquiring a source video, wherein the source video comprises M frames of images and M is an integer greater than 1; cutting the source video into two parts to obtain a first video and a second video, wherein the first video comprises the first K frames of images in the source video, the second video comprises the last N frames of images in the source video, K and N are integers greater than 1, and M is equal to the sum of K and N; and overlapping and splicing the first video and the second video in the order of the second video followed by the first video to obtain a third video, wherein the third video comprises M-L frames of images, L is the number of frames in which the first video and the second video overlap, L is an integer greater than 1, and K and N are integers greater than L. The embodiment of the invention can solve the problem of frame skipping from the last frame to the first frame during video carousel.

Description

Video processing method and device and computer readable storage medium
Technical Field
The present application relates to the field of multimedia technologies, and in particular, to a video processing method and apparatus, and a computer-readable storage medium.
Background
With the continuous development of multimedia technology, users encounter videos more and more often. A video is composed of multiple frames of images, and when the video is played, the frames are played in their order within the video. When the video is played in a carousel (looped playback), the frames are played in order, and playback restarts from the first frame of the video after the last frame has been played. Generally, the difference between the first frame and the last frame of a video is large, so a user can clearly perceive a frame skip when playback jumps from the last frame of the video back to the first frame. Therefore, how to solve the frame skipping from the last frame to the first frame in video carousel has become a technical problem to be solved urgently.
Disclosure of Invention
The embodiment of the invention discloses a video processing method, a video processing device and a computer-readable storage medium, which are used for solving the problem of frame skipping from the last frame to the first frame during video carousel.
A first aspect discloses a video processing method, comprising:
acquiring a source video, wherein the source video comprises M frames of images, and M is an integer greater than 1;
cutting the source video into two parts to obtain a first video and a second video, wherein the first video comprises the first K frames of images in the source video, the second video comprises the last N frames of images in the source video, K and N are integers greater than 1, and M is equal to the sum of K and N;
and overlapping and splicing the first video and the second video in the order of the second video followed by the first video to obtain a third video, wherein the third video comprises M-L frames of images, L is the number of frames in which the first video and the second video overlap, L is an integer greater than 1, and K and N are integers greater than L.
In a possible implementation manner, the overlapping and splicing the first video and the second video according to the order of the second video and the first video to obtain a third video includes:
setting the first N-L frames of images in the second video as the first N-L frames of images of the third video;
setting the last K-L frames of images in the first video as the last K-L frames of images of the third video;
and combining the (N-L+i)-th frame image in the second video and the i-th frame image in the first video into the (N-L+i)-th frame image of the third video, to obtain the middle L frames of images of the third video, where i = 1, 2, …, L.
In a possible implementation manner, the proportion of the second video in the middle L frame image of the third video decreases in the playing order, and the proportion of the first video in the middle L frame image of the third video increases in the playing order.
In a possible implementation manner, after the overlapping and splicing the first video and the second video according to the order of the second video and the first video to obtain a third video, the method further includes:
adjusting the frame rate of the third video to a threshold frame rate according to the frame number and the playing duration of the third video to obtain a dynamic material;
and under the condition that a generation instruction which is input by a user and used for generating the dynamic material and the first image into the animation is detected, generating the animation according to the dynamic material and the first image.
In one possible implementation, the method further includes:
and when a setting instruction for setting the animation as the wallpaper input by the user is detected, setting the animation as the wallpaper.
In one possible implementation, the method further includes:
and converting the format of the third video into a set format if the format of the third video is not the set format.
A second aspect discloses a video processing apparatus comprising:
the device comprises an acquisition unit, a processing unit and a processing unit, wherein the acquisition unit is used for acquiring a source video, the source video comprises M frames of images, and M is an integer greater than 1;
a cutting unit, configured to cut the source video into two parts to obtain a first video and a second video, wherein the first video comprises the first K frames of images in the source video, the second video comprises the last N frames of images in the source video, K and N are integers greater than 1, and M is equal to the sum of K and N;
and a splicing unit, configured to overlap and splice the first video and the second video in the order of the second video followed by the first video to obtain a third video, wherein the third video comprises M-L frames of images, L is the number of frames in which the first video and the second video overlap, L is an integer greater than 1, and K and N are integers greater than L.
In a possible implementation manner, the splicing unit is specifically configured to:
setting the first N-L frames of images in the second video as the first N-L frames of images of the third video;
setting the last K-L frames of images in the first video as the last K-L frames of images of the third video;
and combining the (N-L+i)-th frame image in the second video and the i-th frame image in the first video into the (N-L+i)-th frame image of the third video, to obtain the middle L frames of images of the third video, where i = 1, 2, …, L.
In a possible implementation manner, the proportion of the second video in the middle L frame image of the third video decreases in the playing order, and the proportion of the first video in the middle L frame image of the third video increases in the playing order.
In one possible implementation, the apparatus further includes:
an adjusting unit, configured to, after the splicing unit overlaps and splices the first video and the second video in the order of the second video followed by the first video to obtain the third video, adjust the frame rate of the third video to a threshold frame rate according to the frame number and the playing duration of the third video, to obtain a dynamic material;
and the generating unit is used for generating the animation according to the dynamic material and the first image under the condition of detecting a generating instruction which is input by a user and is used for generating the animation by the dynamic material and the first image.
In one possible implementation, the apparatus further includes:
and the setting unit is used for setting the animation as the wallpaper when detecting a setting instruction which is input by a user and is used for setting the animation as the wallpaper.
In one possible implementation, the apparatus further includes:
a conversion unit configured to convert a format of the third video into a set format if the format of the third video is not the set format.
A third aspect discloses a video processing apparatus, which includes a processor and a memory, the memory storing a set of computer program codes, and the processor causing the video processing apparatus to execute the video processing method disclosed in the first aspect or any one of the possible implementation manners of the first aspect by executing the computer program codes stored in the memory.
A fourth aspect discloses a computer-readable storage medium having stored therein a computer program or computer instructions which, when executed by a computer device, cause the computer device to implement a video processing method as disclosed in the first aspect or any one of the possible implementations of the first aspect.
A fifth aspect discloses a computer program product which, when run on a computer, causes the computer to perform the video processing method disclosed in the first aspect or any possible implementation of the first aspect.
In the embodiment of the invention, a source video is acquired, where the source video comprises M frames of images and M is an integer greater than 1; the source video is cut into two parts to obtain a first video and a second video, where the first video comprises the first K frames of images in the source video, the second video comprises the last N frames of images in the source video, K and N are integers greater than 1, and M is equal to the sum of K and N; the first video and the second video are overlapped and spliced in the order of the second video followed by the first video to obtain a third video, where the third video comprises M-L frames of images, L is the number of frames in which the first video and the second video overlap, L is an integer greater than 1, and K and N are integers greater than L. Therefore, the problem of frame skipping from the last frame to the first frame during video carousel can be solved by processing the video in this way.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 is a schematic diagram of a network architecture according to an embodiment of the present invention;
FIG. 2 is a flow chart of a video processing method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a video processing method according to an embodiment of the present invention;
FIG. 4 is a flow chart of another video processing method disclosed in the embodiment of the invention;
FIG. 5 is a schematic diagram of a data processing system according to an embodiment of the present invention;
FIG. 6 is a diagram illustrating a data read according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of an animation effect according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of a video processing apparatus according to an embodiment of the disclosure;
fig. 9 is a schematic structural diagram of another video processing apparatus according to an embodiment of the disclosure.
Detailed Description
The technical solutions in the embodiments of the present invention will be described below in conjunction with the accompanying drawings. It should be understood that the terms "comprises" and "comprising," when used in this specification and the appended claims, indicate the presence of the described features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is to be understood that the terminology used in the embodiments of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the embodiments of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. In the embodiments of the present invention, the expressions "first" and "second" are used to distinguish two entities with the same name but different parameters; it should be understood that "first" and "second" are only used for convenience of description and should not be construed as limitations of the embodiments of the present invention, and their descriptions are omitted in the following embodiments. It should also be understood that the term "and/or" as used herein encompasses any and all possible combinations of one or more of the associated listed items.
The embodiment of the invention discloses a video processing method, a video processing device and a computer-readable storage medium, which are used for solving the problem of frame skipping from the last frame to the first frame during video carousel. Detailed descriptions are given below.
In order to better understand the video processing method and apparatus disclosed in the embodiments of the present invention, a network architecture used in the embodiments of the present invention is described below. Referring to fig. 1, fig. 1 is a schematic diagram of a network architecture according to an embodiment of the present invention. As shown in fig. 1, the network architecture may include a personal computer (PC) 101, a server 102, and a client 103. Communication between the PC 101 and the server 102, between the server 102 and the client 103, and between the PC 101 and the client 103 may be carried out via a network, which may be based on any wired or wireless network, including but not limited to the Internet, a wide area network, a metropolitan area network, a local area network, a virtual private network (VPN), a wireless communication network, and so on. The PC 101 may be a desktop computer, an all-in-one machine, a notebook computer, a palm computer, a tablet computer, etc., and the client 103 may be an application (APP) corresponding to the server 102 and installed on a terminal to provide local services for the user.
The PC101 may be configured to acquire a video and transmit the video to the server 102. After receiving the video from the PC101, the server 102 stores the video. The client 103 is configured to obtain a video from the server 102, cut the video into two parts, obtain a first video and a second video, and perform overlapping splicing on the first video and the second video according to the sequence of the second video and the first video to obtain a third video. In one case, the PC101 is further configured to perform format conversion on the video in the case where the format of the video does not match the format supported by the client 103 corresponding to the server, and then perform the above steps. Accordingly, the client 103 does not need to format-convert the third video. In one case, the client 103 is further configured to format convert the third video if the format of the third video does not conform to the supported format.
The PC101 may also be configured to obtain a video, cut the video into two parts to obtain a first video and a second video, perform overlapping and splicing on the first video and the second video according to the order of the second video and the first video to obtain a third video, and send the third video to the server 102. After receiving the third video from the PC101, the server 102 stores the third video. And the client 103 is used for acquiring the third video from the server 102. The client 103 is further configured to adjust the frame rate of the third video according to the frame number and the play duration of the third video and the supported frame rate to obtain a dynamic material, and generate an animation according to the dynamic material and the first image when detecting a generation instruction for generating an animation from the dynamic material and one image, which is input by a user. The client 103 is further configured to set the animation as the wallpaper when detecting a setting instruction for setting the animation as the wallpaper, which is input by the user. In one case, the PC101 is further configured to perform format conversion on the third video if the format of the third video does not match the format supported by the client 103 corresponding to the server, and then perform the above steps. Accordingly, the client 103 does not need to format-convert the third video. In one case, the client 103 is further configured to perform format conversion on the third video if the format of the third video is inconsistent with the supported format, and then perform the above steps.
Referring to fig. 2, fig. 2 is a schematic flow chart of a video processing method according to an embodiment of the present invention based on the network architecture shown in fig. 1. Wherein the video processing method is described from the perspective of the client 103. As shown in fig. 2, the video processing method may include the following steps.
201. A source video is acquired.
The client can acquire a source video, and the source of the source video can be at least one of a local video, a network video, a live video and a webpage screen recording. The source video may include M frames of images, where M is an integer greater than 1. For example, please refer to fig. 3, fig. 3 is a schematic diagram of a video processing method according to an embodiment of the present invention. As shown in fig. 3, the source video may include 100 frames of images.
202. And cutting the source video to obtain a first video and a second video.
After the client acquires the source video, the client can cut the source video to obtain a first video and a second video, where the first video comprises the first K frames of images in the source video, the second video comprises the last N frames of images in the source video, K and N are integers greater than 1, and M is equal to the sum of K and N. The cut can be made at any position in the middle portion of the source video, where the middle portion is the part of the video that remains after removing the L frames of images at the head and the L frames of images at the tail; the cut can be made at any such position because the picture change between two consecutive frames of images in a video is small. As shown in fig. 3, cutting the source video may obtain a first video and a second video, where the first video may include the first 50 frames of images in the source video, and the second video may include the last 50 frames of images in the source video.
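As a concrete illustration of this cutting step, the sketch below slices a list of frame images at an arbitrary middle position. The function name and the use of plain Python lists (with a number standing in for each frame image) are assumptions for illustration, not part of the source.

```python
def cut_video(frames, k):
    """Cut a source video's M frames into a first video (the first k frames)
    and a second video (the remaining n = M - k frames)."""
    if not 1 < k < len(frames) - 1:
        raise ValueError("cut must leave at least 2 frames on each side")
    return frames[:k], frames[k:]

# Stand-in for the 100-frame source video of fig. 3: frame i is just the number i.
source = list(range(1, 101))
first, second = cut_video(source, 50)   # first = frames 1..50, second = frames 51..100
```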
203. And overlapping and splicing the first video and the second video to obtain a third video.
After the client cuts the source video to obtain the first video and the second video, the first video and the second video may be overlapped and spliced according to the sequence of the second video and the first video to obtain a third video. Wherein the third video includes M-L frame images, L is the number of frames in which the first video and the second video overlap, L is an integer greater than 1, and K and N are integers greater than L.
Specifically, the first N-L frames of images in the second video may be set as the first N-L frames of images of the third video; the last K-L frames of images in the first video may be set as the last K-L frames of images of the third video; and the (N-L+i)-th frame image in the second video and the i-th frame image in the first video may be combined into the (N-L+i)-th frame image of the third video, to obtain the middle L frames of images of the third video, where i = 1, 2, …, L. When the first video and the second video are overlapped and spliced in the order of the second video followed by the first video, the proportion of the second video in the middle L frames of images of the third video decreases in the playing order, and the proportion of the first video in the middle L frames of images of the third video increases in the playing order. During the overlapping splicing, the transparency of the overlapped frames of the second video can be set in an arithmetic progression from 100% toward 0 in frame order (i.e., with the selected frames arranged in playing order); for example, the transparency of the 98th frame is 80%, that of the 99th frame is 50%, and that of the 100th frame is 20%. Likewise, the transparency of the overlapped frames of the first video can be set in an arithmetic progression from 0 toward 100% in frame order; for example, the transparency of the 1st frame is 20%, that of the 2nd frame is 50%, and that of the 3rd frame is 80%.
According to the selected first N-L frames of images in the second video and the last K-L frames of images in the first video, the consecutive overlapped frames are put into one-to-one correspondence in order; that is, the last L frames of images in the second video correspond one-to-one, in order, with the first L frames of images in the first video. As shown in fig. 3, the acquired source video is first cut into two parts at an arbitrary position; fig. 3 shows a first video (frames 1 to 50) and a second video (frames 51 to 100) obtained by cutting between the 50th frame and the 51st frame. The last 3 frames of images in the second video and the first 3 frames of images in the first video are then overlapped in one-to-one correspondence in order (the 1st frame is overlapped with the 98th frame, the 2nd frame with the 99th frame, and the 3rd frame with the 100th frame). The first video and the second video are spliced in the order of the second video followed by the first video to obtain a third video comprising 97 frames of images.
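The overlap-splice described above can be sketched as follows, treating each frame as a single number so that the cross-fade is a weighted average. The linear weights i/(L+1) are an assumption standing in for the arithmetic-progression transparencies in the text (for L = 3 they give 25%, 50%, 75%, close to the 20%, 50%, 80% example):

```python
def splice_with_overlap(first, second, l):
    """Splice second video then first video, cross-fading the last l frames of
    the second video with the first l frames of the first video.
    Result length is (n - l) + l + (k - l) = n + k - l = M - L frames."""
    n, k = len(second), len(first)
    assert k > l > 1 and n > l
    third = list(second[:n - l])                 # first n-l frames of third video
    for i in range(1, l + 1):                    # middle l blended frames
        w = i / (l + 1)                          # weight of the first video rises
        third.append((1 - w) * second[n - l + i - 1] + w * first[i - 1])
    third.extend(first[l:])                      # last k-l frames of third video
    return third

first = [float(i) for i in range(1, 51)]         # frames 1..50
second = [float(i) for i in range(51, 101)]      # frames 51..100
third = splice_with_overlap(first, second, 3)    # 97 frames, as in fig. 3
```

Because the result begins with frame 51 and ends with frame 50, looping it back to its own start lands on adjacent source frames, which is exactly why the carousel no longer skips.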
It is to be understood that overlapping and splicing the first video and the second video in the order of the second video followed by the first video is not limited to the above method; embodiments of the present invention may include other methods that achieve a visually imperceptible transition between the first video and the second video.
In the video processing method depicted in fig. 2, a source video is acquired, where the source video comprises M frames of images and M is an integer greater than 1; the source video is cut into two parts to obtain a first video and a second video, where the first video comprises the first K frames of images in the source video, the second video comprises the last N frames of images in the source video, K and N are integers greater than 1, and M is equal to the sum of K and N; and the first video and the second video are overlapped and spliced in the order of the second video followed by the first video to obtain a third video, where the third video comprises M-L frames of images, L is the number of frames in which the first video and the second video overlap, L is an integer greater than 1, and K and N are integers greater than L. Therefore, the problem of frame skipping from the last frame to the first frame during video carousel can be solved.
Referring to fig. 4, fig. 4 is a schematic flow chart of another video processing method according to an embodiment of the present invention based on the network architecture shown in fig. 1. Here, the video processing method is described from the perspective of the PC101, the server 102, and the client 103. As shown in fig. 4, the video processing method may include the following steps.
401. The PC acquires the source video.
Step 401 is the same as step 201, and please refer to step 201 for detailed description, which is not repeated herein.
402. The PC cuts the source video to obtain a first video and a second video.
Step 402 is the same as step 202, and please refer to step 202 for detailed description, which is not repeated herein.
403. And the PC carries out overlapping splicing on the first video and the second video to obtain a third video.
Step 403 is the same as step 203, and please refer to step 203 for detailed description, which is not repeated herein.
404. The PC judges whether the third video is in the set format of the client side, and uploads the third video to the server under the condition that the third video is in the set format of the client side; and if the third video is not in the set format of the client, converting the format of the third video into the set format, and uploading the third video to the server.
In the first case, after the PC overlaps and splices the first video and the second video to obtain the third video, it can judge whether the third video is in the set format of the client, and upload the third video to the server if the third video is in the set format of the client; if the third video is not in the set format of the client, the PC converts the format of the third video into the set format and then uploads the third video to the server. In the second case, after the PC overlaps and splices the first video and the second video to obtain the third video, it uploads the third video to the server directly; the client downloads the third video from the server and performs format conversion on the third video if the format of the third video is inconsistent with the formats it supports.
The PC can compress the third video before uploading it to the server, and during compression can preprocess the third video by taking out its video frames. For example, for a third video of 3 seconds at 30 frames per second, 90 frames of images can be taken out. Referring to fig. 5, fig. 5 is a schematic diagram of data processing according to an embodiment of the disclosure. As shown in fig. 5, each frame of image in the third video may be taken out and written into wallpaper.
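A minimal sketch of the frame-extraction and packing step, assuming the per-frame data are equal-sized byte buffers and an illustrative header layout (the source does not specify the wallpaper file format, so the count/size header here is an assumption):

```python
import struct

def pack_frames(frames):
    """Concatenate equal-sized per-frame byte buffers into one blob with a
    small header (frame count, frame size) so a reader can index frames."""
    assert frames and all(len(f) == len(frames[0]) for f in frames)
    header = struct.pack("<II", len(frames), len(frames[0]))
    return header + b"".join(frames)

blob = pack_frames([b"\x01\x02", b"\x03\x04"])   # two tiny 2-byte "frames"
```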
Each frame of image may be compressed frame by frame. During compression, if each frame of image is highly regular, a simplified compression algorithm may be used, for example erasing the lower 3 bits of each 8-bit pixel value; other compression algorithms may also be used to realize the compression.
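As a minimal sketch of the simplified compression described above (the function name and sample pixel values are illustrative, and frames are assumed to be flat sequences of 8-bit pixel values):

```python
def erase_low_bits(pixels: bytes, bits: int = 3) -> bytes:
    """Zero out the lowest `bits` bits of each 8-bit pixel value."""
    mask = (0xFF << bits) & 0xFF  # bits=3 -> 0b11111000
    return bytes(p & mask for p in pixels)

# A hypothetical row of 8-bit pixel values.
row = bytes([200, 37, 129, 255])
print(list(erase_low_bits(row)))  # [200, 32, 128, 248]
```

Each pixel then has only 32 distinct values instead of 256, so the masked data compresses better with any downstream entropy coder; the visual cost is a small quantization error of at most 7 gray levels per pixel.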
405. The client adjusts the frame rate of the third video to a threshold frame rate according to the frame number and the playing duration of the third video, so as to obtain the dynamic material.
After the PC uploads the third video to the server, the client may download the third video from the server and adjust its frame rate to the threshold frame rate according to the frame number and the playing duration of the third video, so as to obtain the dynamic material. Specifically, after downloading the third video from the server, the client adjusts the frame rate of the third video to the threshold frame rate according to its frame number and playing duration to obtain the dynamic material. If the client downloads a compressed third video from the server, it decompresses and reads the third video to obtain the dynamic material. Referring to fig. 6, fig. 6 is a schematic diagram of data reading according to an embodiment of the disclosure. As shown in fig. 6, the client may create a texture unit textunit corresponding to the multi-frame image of the third video and read the corresponding wallpaper data. The formula for data reading is as follows:
Index = (Current Time * N / Video Time) % N
where Index is the index number of the frame data in the wallpaper, Current Time is the current playback time, Video Time is the playing duration of the third video, and N is the number of frames. The frame rate of the dynamic material obtained in this way is the frame rate of the third video. The playback speed of the dynamic material can also be adjusted by increasing or decreasing Video Time.
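A minimal sketch of this read formula (the function and variable names are illustrative; times are in seconds):

```python
def frame_index(current_time: float, num_frames: int, video_time: float) -> int:
    """Implement Index = (Current Time * N / Video Time) % N.

    Maps the current playback time to an index into the wallpaper frame
    data; the modulo makes playback wrap around for seamless looping.
    """
    return int(current_time * num_frames / video_time) % num_frames

# Hypothetical 3-second, 90-frame third video (30 frames per second).
print(frame_index(0.0, 90, 3.0))  # 0
print(frame_index(1.5, 90, 3.0))  # 45
print(frame_index(3.5, 90, 3.0))  # 15 (wrapped past the end)
```

Note that halving `video_time` doubles the effective playback speed, which matches the statement that the playback speed can be adjusted by changing Video Time.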
406. When the client detects a generation instruction, input by the user, for generating an animation from the dynamic material and a first image, the client generates the animation from the dynamic material and the first image.
After the dynamic material is obtained, when a generation instruction input by the user for generating an animation from the dynamic material and the first image is detected, the animation is generated from the dynamic material and the first image. Referring to fig. 7, fig. 7 is a schematic diagram of an animation effect according to an embodiment of the disclosure.
407. When the client detects a setting instruction, input by the user, for setting the animation as the wallpaper, the client sets the animation as the wallpaper.
After the client generates the animation from the dynamic material and the first image, it sets the animation as the wallpaper when it detects a setting instruction, input by the user, for setting the animation as the wallpaper. The client can also respond to a preview instruction from the user, so that the user knows the specific effect of the generated wallpaper in advance and is not left dissatisfied only after the setting is finished. Instead of immediately setting the generated wallpaper as the wallpaper of the terminal device, the client may first only display it on the current display interface, so that the user can preview in advance how the terminal device will look after the wallpaper is set. If the user is satisfied with the generated wallpaper, it can be set as the wallpaper of the terminal device through a confirm button on the current display interface; if the user is not satisfied, a corresponding cancel button can be triggered, in which case the generated wallpaper is not used as the wallpaper of the terminal device. In practical applications, a cancel instruction from the user may be received through the display interface of the current wallpaper effect preview. The specific form of the identifier that triggers the cancel instruction may be configured according to actual needs: it may be a designated trigger button or input box on the preview display interface, or a voice instruction from the user. For example, a "Cancel" virtual button may be displayed on the preview display interface, and the user clicking this button constitutes a cancel instruction.
In the video processing method described in fig. 4, the method is described from the perspective of a PC, a server, and a client. The PC acquires a source video, cuts it to obtain a first video and a second video, and overlappingly splices the first video and the second video to obtain a third video. The PC determines whether the third video is in the format set by the client and uploads it to the server if it is; if not, the PC converts the third video into the set format before uploading it. The client can download the third video from the server, adjust its frame rate to the threshold frame rate according to its frame number and playing duration to obtain the dynamic material, and generate the animation from the dynamic material and the first image when a generation instruction input by the user for generating the animation from the dynamic material and the first image is detected. In this way, the PC processes the source video to obtain the third video, and this processing solves the problem of frame skipping from the tail frame to the head frame during video carousel. After obtaining the third video, the client performs animation generation and wallpaper setting according to the user's input, so the client itself does not need to perform the video processing, which reduces the memory consumption and power consumption of the client's terminal device.
Referring to fig. 8, fig. 8 is a schematic structural diagram of a video processing apparatus according to an embodiment of the present invention. As shown in fig. 8, the video processing apparatus may include:
an obtaining unit 801, configured to obtain a source video, where the source video includes M frames of images, and M is an integer greater than 1;
a cutting unit 802, configured to cut a source video into two parts to obtain a first video and a second video, where the first video includes a front K frame image in the source video, the second video includes a rear N frame image in the source video, K and N are integers greater than 1, and M is equal to the sum of K and N;
and a splicing unit 803, configured to perform overlapping splicing on the first video and the second video according to the sequence of the second video and the first video, so as to obtain a third video, where the third video includes M-L frame images, L is the number of frames in which the first video and the second video are overlapped, L is an integer greater than 1, and K and N are integers greater than L.
In one embodiment, the splicing unit 803 is specifically configured to:
setting the first N-L frame images in the second video as the first N-L frame images of the third video;
setting a rear K-L frame image in the first video as a rear K-L frame image of a third video;
and combining the (N-L+i)-th frame image in the second video and the i-th frame image in the first video into the (N-L+i)-th frame image of the third video, to obtain the middle L frame images of the third video, where i = 1, 2, …, L.
In one embodiment, the proportion of the second video in the middle L frame image of the third video decreases in the playing order, and the proportion of the first video in the middle L frame image of the third video increases in the playing order.
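Under the assumptions that frames are flat sequences of 8-bit pixel values and that the decreasing/increasing proportions form a linear cross-fade (the patent does not fix the exact weighting, so the linear weights below are illustrative), the overlapping splicing performed by the splicing unit can be sketched as:

```python
def blend(frame_a, frame_b, weight_a):
    """Pixel-wise weighted mix of two equal-size frames; frame_b gets 1 - weight_a."""
    return [int(a * weight_a + b * (1.0 - weight_a)) for a, b in zip(frame_a, frame_b)]

def overlap_splice(first_video, second_video, overlap):
    """Overlappingly splice in the order (second video, first video).

    first_video has K frames, second_video has N frames, and `overlap` is L.
    The result has N + K - L frames: the first N-L frames of the second video,
    L blended middle frames, then the last K-L frames of the first video.
    """
    n, l = len(second_video), overlap
    third = list(second_video[: n - l])          # first N-L frames of the third video
    for i in range(1, l + 1):                    # middle L frames
        # The (N-L+i)-th frame of the second video fades out (decreasing
        # proportion) while the i-th frame of the first video fades in.
        w_second = 1.0 - i / (l + 1)
        third.append(blend(second_video[n - l + i - 1], first_video[i - 1], w_second))
    third += first_video[l:]                     # last K-L frames of the third video
    return third

# Hypothetical single-pixel frames: second video all 100, first video all 0.
print(overlap_splice([[0]] * 3, [[100]] * 3, 1))  # [[100], [100], [50], [0], [0]]
```

Because the third video begins with the tail part of the source video and ends with its head part, looping the third video places the original tail-to-head transition inside the blended middle frames, which is what removes the visible jump during carousel.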
In one embodiment, the video processing apparatus further comprises:
an adjusting unit 804, configured to, after the splicing unit 803 overlappingly splices the first video and the second video in the order of the second video and the first video to obtain the third video, adjust the frame rate of the third video to the threshold frame rate according to the frame number and the playing duration of the third video, so as to obtain the dynamic material;
a generating unit 805, configured to generate an animation from the dynamic material and a first image when a generation instruction, input by a user, for generating the animation from the dynamic material and the first image is detected.
In one embodiment, the video processing apparatus further comprises:
a setting unit 806, configured to set the animation as the wallpaper when detecting a setting instruction for setting the animation as the wallpaper input by the user.
In one embodiment, the video processing apparatus further comprises:
a converting unit 807 for converting the format of the third video into the set format in the case where the format of the third video is not the set format.
The detailed descriptions of the obtaining unit 801, the cutting unit 802, the splicing unit 803, the adjusting unit 804, the generating unit 805, the setting unit 806, and the converting unit 807 can be directly obtained by referring to the related descriptions in the method embodiments shown in fig. 2 and fig. 4, which are not repeated herein.
Referring to fig. 9, fig. 9 is a schematic structural diagram of another video processing apparatus according to an embodiment of the disclosure. As shown in fig. 9, the video processing apparatus may include: a memory 901, a transceiver 902, and a processor 903 coupled to the memory 901 and the transceiver 902. The memory 901 is used to store a computer program comprising program instructions, the processor 903 is used to execute the program instructions stored in the memory 901, and the transceiver 902 is used to communicate with other devices under the control of the processor 903. When the processor 903 executes the program instructions, the video processing method described above can be performed.
The processor 903 may be a Central Processing Unit (CPU), a general purpose processor, a Digital Signal Processor (DSP), an application-specific integrated circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof, and may implement or perform the various illustrative logical blocks, modules, and circuits described in connection with the disclosure. The processor 903 may also be a combination of computing devices, e.g., a combination of one or more microprocessors, or a combination of a DSP and a microprocessor. The transceiver 902 may be a communication interface, a transceiver circuit, or the like, where the communication interface is a general term and may include one or more interfaces, such as an interface between the video processing apparatus and a terminal.
Optionally, the video processing apparatus may further include a bus 904, wherein the memory 901, the transceiver 902, and the processor 903 may be connected to each other through the bus 904. The bus 904 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus 904 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in FIG. 9, but this does not indicate only one bus or one type of bus.
In addition to the memory 901, the transceiver 902, the processor 903 and the bus 904 shown in fig. 9, the video processing apparatus in the embodiment may further include other hardware according to the actual function of the video processing apparatus, which is not described again.
The embodiment of the invention also discloses a storage medium storing a program, and when the program runs, the video processing method shown in fig. 2 and fig. 4 is implemented.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-only memory (ROM), a Random Access Memory (RAM), or the like.
The above embodiments further describe the objects, technical solutions and advantages of the present application in detail. It should be understood that the above are only embodiments of the present application and are not intended to limit its scope; any modifications, equivalent substitutions, improvements and the like made on the basis of the technical solutions of the present application shall be included in the scope of the present application.

Claims (10)

1. A video processing method, comprising:
acquiring a source video, wherein the source video comprises M frames of images, and M is an integer greater than 1;
cutting the source video into two parts to obtain a first video and a second video, wherein the first video comprises a front K frame image in the source video, the second video comprises a rear N frame image in the source video, K and N are integers more than 1, and M is equal to the sum of K and N;
and overlapping and splicing the first video and the second video according to the sequence of the second video and the first video to obtain a third video, wherein the third video comprises M-L frame images, L is the number of overlapped frames of the first video and the second video, L is an integer larger than 1, and K and N are integers larger than L.
2. The method of claim 1, wherein the overlapping and splicing the first video and the second video in the order of the second video and the first video to obtain a third video comprises:
setting the first N-L frame images in the second video as the first N-L frame images of a third video;
setting a rear K-L frame image in the first video as a rear K-L frame image of the third video;
and combining the (N-L+i)-th frame image in the second video and the i-th frame image in the first video into the (N-L+i)-th frame image of the third video to obtain the middle L frame images of the third video, wherein i = 1, 2, …, L.
3. The method according to claim 2, wherein the proportion of the second video in the middle L-frame image of the third video decreases in the playing order, and the proportion of the first video in the middle L-frame image of the third video increases in the playing order.
4. The method according to any one of claims 1-3, wherein after said overlapping and splicing said first video and said second video in the order of said second video and said first video to obtain a third video, said method further comprises:
adjusting the frame rate of the third video to a threshold frame rate according to the frame number and the playing duration of the third video to obtain a dynamic material;
and under the condition that a generation instruction which is input by a user and used for generating the dynamic material and the first image into the animation is detected, generating the animation according to the dynamic material and the first image.
5. The method of claim 4, further comprising:
and when a setting instruction for setting the animation as the wallpaper input by the user is detected, setting the animation as the wallpaper.
6. The method according to any one of claims 1-3, further comprising:
and converting the format of the third video into a set format if the format of the third video is not the set format.
7. A video processing apparatus, comprising:
the device comprises an acquisition unit, a processing unit and a processing unit, wherein the acquisition unit is used for acquiring a source video, the source video comprises M frames of images, and M is an integer greater than 1;
a cutting unit, used for cutting the source video into two parts to obtain a first video and a second video, wherein the first video comprises a front K frame image in the source video, the second video comprises a rear N frame image in the source video, K and N are integers greater than 1, and M is equal to the sum of K and N;
and the splicing unit is used for performing overlapping splicing on the first video and the second video according to the sequence of the second video and the first video to obtain a third video, wherein the third video comprises M-L frame images, L is the number of frames in which the first video and the second video are overlapped, L is an integer greater than 1, and K and N are integers greater than L.
8. The apparatus according to claim 7, wherein the splicing unit is specifically configured to:
setting the first N-L frame images in the second video as the first N-L frame images of a third video;
setting a rear K-L frame image in the first video as a rear K-L frame image of the third video;
and combining the (N-L+i)-th frame image in the second video and the i-th frame image in the first video into the (N-L+i)-th frame image of the third video to obtain the middle L frame images of the third video, wherein i = 1, 2, …, L.
9. Video processing apparatus comprising a processor and a memory, the memory storing a set of computer program code, the processor implementing the method of any one of claims 1 to 6 by executing the computer program code stored by the memory.
10. A computer-readable storage medium, in which a computer program or computer instructions are stored which, when executed, implement the method according to any one of claims 1-6.
CN202010126854.XA 2020-02-28 2020-02-28 Video processing method and device and computer readable storage medium Pending CN111343500A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010126854.XA CN111343500A (en) 2020-02-28 2020-02-28 Video processing method and device and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN111343500A true CN111343500A (en) 2020-06-26

Family

ID=71185537

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010126854.XA Pending CN111343500A (en) 2020-02-28 2020-02-28 Video processing method and device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN111343500A (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105872700A (en) * 2015-11-30 2016-08-17 乐视网信息技术(北京)股份有限公司 Method and device for realizing seamless circulation of startup video
WO2018125590A1 (en) * 2016-12-30 2018-07-05 Tivo Solutions Inc. Advanced trick-play modes for streaming video


Similar Documents

Publication Publication Date Title
US11943486B2 (en) Live video broadcast method, live broadcast device and storage medium
CN110519638B (en) Processing method, processing device, electronic device, and storage medium
CN110121098B (en) Video playing method and device, storage medium and electronic device
CN110070896B (en) Image processing method, device and hardware device
CN106804003B (en) Video editing method and device based on ffmpeg
CN111562895B (en) Multimedia information display method and device and electronic equipment
CN106507200B (en) Video playing content insertion method and system
WO2018090704A1 (en) Image processing method, apparatus and electronic equipment
WO2019227429A1 (en) Method, device, apparatus, terminal, server for generating multimedia content
CN110781349A (en) Method, equipment, client device and electronic equipment for generating short video
EP4131983A1 (en) Method and apparatus for processing three-dimensional video, readable storage medium, and electronic device
CN114222196A (en) Method and device for generating short video of plot commentary and electronic equipment
CN108495041B (en) Image processing and displaying method and device for electronic terminal
CN113822972A (en) Video-based processing method, device and readable medium
JP2020028096A (en) Image processing apparatus, control method of the same, and program
CN112637675A (en) Video generation method and device, electronic equipment and storage medium
CN113055730B (en) Video generation method, device, electronic equipment and storage medium
CN111343503A (en) Video transcoding method and device, electronic equipment and storage medium
CN112929728A (en) Video rendering method, device and system, electronic equipment and storage medium
CN111343500A (en) Video processing method and device and computer readable storage medium
CN116847147A (en) Special effect video determining method and device, electronic equipment and storage medium
CN111290822B (en) Desktop wallpaper display method and device and computer readable storage medium
CN113905254A (en) Video synthesis method, device, system and readable storage medium
CN113139090A (en) Interaction method, interaction device, electronic equipment and computer-readable storage medium
CN112449249A (en) Video stream processing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20200626