CN112165632B - Video processing method, device and equipment - Google Patents

Video processing method, device and equipment

Info

Publication number
CN112165632B
CN112165632B, CN202011034296.0A, CN202011034296A
Authority
CN
China
Prior art keywords
image
target
video
special effect
video frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011034296.0A
Other languages
Chinese (zh)
Other versions
CN112165632A (en)
Inventor
王启明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd filed Critical Beijing Zitiao Network Technology Co Ltd
Priority to CN202011034296.0A priority Critical patent/CN112165632B/en
Publication of CN112165632A publication Critical patent/CN112165632A/en
Application granted granted Critical
Publication of CN112165632B publication Critical patent/CN112165632B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • H04N21/2343Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44016Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving splicing one content stream with another content stream, e.g. for substituting a video clip

Abstract

Embodiments of the disclosure provide a video processing method, apparatus, and device. The method includes: acquiring a video frame image from a captured video stream collected by a capture device; encoding a first target image obtained based on the video frame image to obtain encoded video data; if the video frame image is one used for preview, copying it to obtain a second target image, the video frame images used for preview being obtained by frame extraction from the captured video stream; superimposing a first target special effect onto the second target image; and rendering and displaying the superimposed image as a preview video stream. The embodiments increase the capture rate of images, improve the playback of the video synthesized afterwards, and thereby improve the user experience.

Description

Video processing method, device and equipment
Technical Field
Embodiments of the disclosure relate to the technical field of image processing, and in particular to a video processing method, apparatus, and device.
Background
With the popularization of terminal devices, the shooting functions they provide bring great convenience to users. When an image is captured, a special effect can be added to it to improve the result.
In the prior art, when a photo or a video is shot with a terminal device, the applied special effect can be previewed on the device in real time, giving the user more room to choose among special effects.
However, when the special effect is previewed on the terminal device, the next frame can be captured only after the current frame has been captured and its special-effect preview has been completed. This lowers the capture rate of images, may degrade the playback of the video synthesized afterwards, and thus harms the user experience.
Disclosure of Invention
Embodiments of the disclosure provide a video processing method, apparatus, and device for increasing the capture rate of images.
In a first aspect, an embodiment of the present disclosure provides a video processing method, including:
acquiring a video frame image from a captured video stream collected by a capture device;
encoding a first target image obtained based on the video frame image to obtain encoded video data;
if the video frame image is a video frame image used for preview, copying the video frame image to obtain a second target image, where the video frame images used for preview are obtained by frame extraction from the captured video stream; and
superimposing a first target special effect onto the second target image, and rendering and displaying the superimposed image as a preview video stream.
In a second aspect, an embodiment of the present disclosure provides a video processing apparatus, including:
the acquisition module is used for acquiring video frame images in the shot video stream acquired by the acquisition equipment;
the processing module is used for coding a first target image obtained based on the video frame image to obtain coded video data;
the processing module is further configured to copy the video frame image to obtain a second target image if the video frame image is a video frame image for preview; the video frame image for previewing is obtained by frame extraction of the shooting video stream;
the processing module is further configured to superimpose the first target special effect on the second target image and render and display an image obtained after the superimposition as a preview video stream.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including: at least one processor and memory;
the memory stores computer-executable instructions;
the at least one processor executing the computer-executable instructions stored by the memory causes the at least one processor to perform the video processing method as described above in relation to the first aspect and the various possible references to the first aspect.
In a fourth aspect, embodiments of the present disclosure provide a computer-readable storage medium, where computer-executable instructions are stored, and when a processor executes the computer-executable instructions, the video processing method according to the first aspect and various possible references of the first aspect are implemented.
With this scheme, a video frame image is first acquired from the captured video stream collected by the capture device, and a first target image obtained based on the video frame image is encoded to obtain encoded video data. If the video frame image is one used for preview, it is first copied to obtain a second target image, the first target special effect is then superimposed onto the second target image, and the superimposed image is rendered and displayed as a preview video stream. Because encoding and preview display are handled separately, the capture rate of images is no longer limited by preview rendering, which also improves the playback of the video synthesized afterwards.
Drawings
In order to more clearly illustrate the embodiments of the present disclosure or the technical solutions in the prior art, the drawings needed in the description of the embodiments or of the prior art are briefly introduced below. The drawings described below show some embodiments of the present disclosure; other drawings can be derived from them by those skilled in the art without inventive effort.
Fig. 1 is a schematic flow chart of a conventional shooting method provided in the embodiment of the present disclosure;
fig. 2 is a schematic flowchart of a video processing method according to an embodiment of the disclosure;
FIG. 3 is a schematic application diagram of a first target special effect selection interface provided in an embodiment of the present disclosure;
fig. 4 is an application schematic diagram of a special effect shooting interface provided in the embodiment of the present disclosure;
fig. 5 is a schematic flow chart illustrating a video processing method according to another embodiment of the present disclosure;
fig. 6 is a schematic view of an application of video uploading provided by an embodiment of the present disclosure;
fig. 7 is a schematic diagram of a video processing method according to an embodiment of the disclosure;
fig. 8 is a schematic structural diagram of a video processing apparatus according to an embodiment of the present disclosure;
fig. 9 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present disclosure.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present disclosure clearer, the technical solutions of the embodiments are described below clearly and completely with reference to the accompanying drawings. The described embodiments are some, but not all, of the embodiments of the present disclosure. All other embodiments obtained by a person skilled in the art from the disclosed embodiments without creative effort fall within the protection scope of the present disclosure.
In the prior art, when a photo or a video is shot with a terminal device, the applied special effect can be previewed on the device in real time in order to give the user more room to choose among special effects. For example, after the user selects a special effect, the result of applying it is displayed on the terminal device in real time, and if the user is not satisfied, another special effect can be selected for preview. Fig. 1 is a schematic flow diagram of an existing shooting method provided by an embodiment of the present disclosure. As shown in Fig. 1, when the special effect is previewed on the terminal device, the next frame can be captured only after the current frame has been captured and its special-effect preview has been completed. This lowers the capture rate of images and may degrade the playback of the video synthesized afterwards; for example, the small number of captured frames can cause the synthesized video to stutter during playback, reducing the smoothness of the picture and harming the user experience.
To address this, the present disclosure processes the image frames used for encoding and the image frames used for preview display in two separate threads. This prevents the preview display from limiting the capture rate of images, increases the capture rate, improves the playback of the video synthesized afterwards, and thereby improves the user experience.
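To make the split concrete, the following is a minimal Kotlin sketch of the idea (an illustration, not taken from the patent): plain byte arrays stand in for camera frames, the encode path receives every captured frame through a blocking queue, and the preview path only receives copies it can absorb, so preview rendering can never stall capture. The fixed-interval frame extraction actually described in the embodiments below is approximated here by a non-blocking offer().

```kotlin
import java.util.concurrent.ArrayBlockingQueue
import java.util.concurrent.TimeUnit
import kotlin.concurrent.thread

fun main() {
    val encodeQueue = ArrayBlockingQueue<ByteArray>(64)   // frames destined for the encoder
    val previewQueue = ArrayBlockingQueue<ByteArray>(4)   // copies destined for preview rendering

    val encoder = thread {
        repeat(120) { encodeQueue.take() /* feed the video encoder here */ }
    }
    thread(isDaemon = true) {                              // preview thread; exits with main
        while (true) {
            val frame = previewQueue.poll(50, TimeUnit.MILLISECONDS) ?: continue
            // overlay the selected special effect on `frame` and render it
        }
    }

    repeat(120) {                                          // one simulated second of capture at 120 fps
        val frame = ByteArray(16)
        encodeQueue.put(frame)                             // the encode path gets every captured frame
        previewQueue.offer(frame.copyOf())                 // preview gets a copy only if it can keep up
    }
    encoder.join()
}
```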
Fig. 2 is a schematic flow diagram of a video processing method provided in an embodiment of the present disclosure. As shown in Fig. 2, the method of this embodiment may be applied to a terminal device such as a smartphone, a tablet, or a personal computer that can take photos or record video and can apply special effects. In addition, a special-effect shooting APP may be installed on the terminal device to implement the corresponding special effects. As shown in Fig. 2, the video processing method includes:
s201: and acquiring a video frame image in the shooting video stream acquired by the acquisition equipment.
In this embodiment, the capture device may be a camera. It may be disposed on the front or the rear of the terminal device, and the number of cameras may be set according to, or customized for, the model of the terminal device.
Further, when the user wants to shoot a video, shooting can be started by touching a relevant shooting control (e.g., a shooting button). Specifically, the terminal device may, in response to a first touch operation, control the capture device to capture and obtain video frame images from the captured video stream. The first touch operation may be a single-click operation, a multi-click operation, or a long-press operation. The number of clicks of a multi-click operation can be set according to the actual application scenario, for example, a double-click or triple-click operation. The duration of a long-press operation can also be set according to the actual application scenario, for example, longer than 3 seconds.
In addition, the frame rate of the captured video stream collected by the capture device depends on the maximum frame rate supported by the terminal device and can generally reach 120 frames per second.
S202: and carrying out coding processing on a first target image obtained based on the video frame image to obtain coded video data.
In this embodiment, after a video frame image is obtained, it may first be stored in a buffer space, and the next frame can then be captured immediately. When the image is processed later, it is read directly from the buffer space; there is no need to wait for the special-effect preview to finish before capturing the next frame, which increases the capture rate of images.
The video frame image may be stored in a first buffer space, the first target image subsequently used for encoding may be stored in a second buffer space, and the second target image subsequently used for preview display may be stored in a third buffer space.
In addition, because the first target image used for encoding is identical to the video frame image stored in the first buffer space, the video frame image can also be read directly from the first buffer space for encoding, so encoded video data can be obtained without providing a second buffer space.
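As a rough illustration of this buffering choice (a sketch under the assumption that frames are simple byte arrays, not the patent's actual data structures), the encoder can consume frames straight from the first cache, while only the preview path works on deep copies kept in a separate cache:

```kotlin
// Hypothetical frame store mirroring the description above: the first buffer feeds
// the encoder directly (no second buffer needed), and the third buffer holds copies
// reserved for preview so effect rendering never touches data being encoded.
class FrameStore {
    private val firstBuffer = ArrayDeque<ByteArray>()
    private val thirdBuffer = ArrayDeque<ByteArray>()

    fun onFrameCaptured(frame: ByteArray, forPreview: Boolean) {
        firstBuffer.addLast(frame)
        if (forPreview) thirdBuffer.addLast(frame.copyOf())
    }

    fun nextFrameToEncode(): ByteArray? = firstBuffer.removeFirstOrNull()
    fun nextFrameToPreview(): ByteArray? = thirdBuffer.removeFirstOrNull()
}
```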
In addition, encoding the image into video data may use any existing encoding method, which is not discussed in detail here.
S203: if the video frame image is used for previewing, copying the video frame image to obtain a second target image; wherein, the video frame image for previewing is obtained by frame extraction of the shooting video stream.
In this embodiment, to improve the user experience, after the user selects a special effect it may be previewed on the terminal device in real time so that the user can decide whether it is the desired effect; if not, the user can switch to another special effect during the preview to check its result.
Specifically, to prevent special-effect rendering from taking so long that it affects the capture rate, the video frame image may be copied to obtain a second target image for preview, and the preview display processing is then performed on the second target image.
Further, since the preview display supports a frame rate of at most 30 frames per second while the captured video frames can reach up to 120 frames per second, the second target images for preview display are obtained by frame extraction from the captured video stream.
In addition, the frame extraction may be performed on the captured video stream according to a preset extraction condition, which may specifically be: extracting one frame every preset time interval, or extracting one frame every preset number of frames. The preset time interval and the preset number of frames can be set according to the actual situation. For example, the preset number of frames may be 3, i.e., one frame is extracted every 3 frames; assuming the capture device shoots at most 120 frames per second, extracting one frame every 3 frames yields the second target images, which are stored in the third buffer space, so that 30 second target images are sent into the third buffer space per second.
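A small Kotlin sketch of the two extraction conditions mentioned above (the interval values are taken from the example and are not mandated by the method): keep one frame per preset frame interval, or one frame per preset time interval.

```kotlin
class FrameExtractor(
    private val frameInterval: Int? = null,     // e.g. 3: keep one frame, skip the next three
    private val timeIntervalMs: Long? = null    // e.g. 33: keep at most one frame every 33 ms
) {
    private var frameCount = 0
    private var lastKeptMs = -1L

    fun shouldKeep(timestampMs: Long): Boolean {
        frameInterval?.let { n -> return frameCount++ % (n + 1) == 0 }
        timeIntervalMs?.let { t ->
            if (lastKeptMs < 0 || timestampMs - lastKeptMs >= t) { lastKeptMs = timestampMs; return true }
        }
        return false
    }
}

fun main() {
    // 120 captured frames in one second with a 3-frame interval -> 30 preview frames.
    val extractor = FrameExtractor(frameInterval = 3)
    val kept = (0 until 120).count { extractor.shouldKeep(it * 1000L / 120) }
    println("preview frames per second: $kept")   // prints 30
}
```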
In addition, the effect selected by the user at the time of preview may be the first target effect. The first target special effect may be a sticker special effect, a flip special effect, a filter special effect, a beautification special effect, a music selection special effect, a countdown special effect, or the like, or may be a combination of these special effects, and exemplarily, the first target special effect is a combination of the sticker special effect and the filter special effect.
In addition, the first target special effect may be selected according to a second touch operation triggered by the user; specifically, the terminal device may, in response to the second touch operation, select the first target special effect corresponding to it. The second touch operation may be a single-click, multi-click, or long-press operation; it is not tied to the first touch operation and may be the same as or different from it.
Further, the second touch operation may be responded to either after or before the first touch operation; that is, the user may select the first target special effect first and then shoot, or shoot first and then select the first target special effect, which is not limited here.
In a specific example, Fig. 3 is an application schematic diagram of a first target special effect selection interface provided in an embodiment of the present disclosure, where the second touch operation is a single-click operation. As shown in part a of Fig. 3, the shooting interface of the terminal device includes four special effect controls: special effect control A, special effect control B, special effect control C, and special effect control D. The user clicks special effect control D, so the first target special effect is the effect corresponding to control D. As shown in part b of Fig. 3, after the user clicks special effect control D, the drifting-heart picture special effect corresponding to control D is displayed on the shooting interface.
S204: and superposing the first target special effect to the second target image, and rendering and displaying the superposed image as a preview video stream.
In this embodiment, after the first target special effect is determined, it may be superimposed onto the second target image used for preview, and the superimposed image is then rendered and displayed as a preview video stream. The preview video stream may be a stream of 30 frames per second.
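For illustration only, a sketch of this step with a hypothetical Effect type (not part of the disclosure): the currently selected first target effect is composited onto each copied preview frame before it is handed to the display, and it can be swapped while previewing without touching the frames being encoded.

```kotlin
fun interface Effect {
    fun applyTo(frame: ByteArray): ByteArray   // returns the frame with the effect drawn on it
}

class PreviewRenderer(private var effect: Effect? = null) {
    fun switchEffect(newEffect: Effect?) { effect = newEffect }   // user picks another effect mid-preview

    fun render(previewFrame: ByteArray) {
        val composited = effect?.applyTo(previewFrame) ?: previewFrame
        display(composited)                                       // up to 30 frames per second
    }

    private fun display(frame: ByteArray) { /* draw to the preview surface; stub in this sketch */ }
}
```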
Exemplarily, Fig. 4 is an application schematic diagram of a special-effect shooting interface provided in an embodiment of the present disclosure; it continues the example of Fig. 3, and the terminal device further includes a shooting key. In this embodiment the user selects the first target special effect and then starts shooting: as shown in part a of Fig. 4, after the user touches the first target special effect control, the shooting key can then be touched; as shown in part b of Fig. 4, after the user touches the shooting key, shooting of the special-effect video starts.
With this scheme, a video frame image is first acquired from the captured video stream collected by the capture device, and the first target image obtained based on the video frame image is encoded to obtain encoded video data. If the video frame image is one used for preview, it is first copied to obtain a second target image, the first target special effect is superimposed onto the second target image, and the superimposed image is rendered and displayed as a preview video stream. Because the encoding path and the preview path are processed separately, the preview display does not limit the capture rate of images, which improves the capture rate and the playback of the video synthesized afterwards.
In another embodiment, before performing an encoding process on a first target image obtained based on a video frame image to obtain encoded video data, the method may further include:
and acquiring a second target special effect triggered by touch operation.
Fig. 5 is a flowchart illustrating a video processing method according to another embodiment of the disclosure, and as shown in fig. 5, S202 may include:
s501: and superposing the second target special effect to the first target image to obtain a superposed image.
S502: and carrying out coding processing on the superposed images to obtain coded video data.
In this embodiment, when the first target image is encoded, either the first target image alone may be encoded, yielding video data without any special effect, or the first target image with the second target special effect added may be encoded, yielding video data to which the second target special effect has been applied.
The second target special effect is a special effect applied to encoding, and may be the same as the first target special effect or different from the first target special effect. And the second target special effect can be a single special effect or a combination of several special effects. For example, the second target special effect may be a sticker special effect, a flip special effect, a filter special effect, a beautification special effect, a music selection special effect, a countdown special effect, a speed adjustment special effect, or the like, or may be a combination of the filter special effect and the speed adjustment special effect.
In addition, the second target special effect may also be a combination of the first target special effect and a slow-motion special effect. In that case, superimposing the second target special effect onto the first target image to obtain a superimposed image may include:
and superposing the first target special effect to the first target image to obtain a superposed initial image.
And carrying out slow motion special effect processing on the superposed initial image according to the slow motion special effect to obtain a superposed image.
Specifically, if the second target special effect is a combination of the first target special effect and a slow-motion special effect, the first target special effect may be superimposed onto the first target image to obtain a superimposed initial image; the first target special effect can be any one or more of the special effects listed above. Slow-motion special effect processing is then performed on the superimposed initial image according to the slow-motion special effect, yielding a video stream image containing both the first target special effect and the slow-motion special effect. The slow-motion special effect slows down the video playback speed; illustratively, playback can be slowed by a factor of 0.5, 1, 1.5, or 3.
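A sketch of the slow-motion step under one plausible reading of the embodiment (the patent does not prescribe the mechanism): slowing playback by a factor is realized here by stretching each frame's presentation timestamp before the overlaid frames are encoded.

```kotlin
data class TimedFrame(val pixels: ByteArray, val presentationUs: Long)

// Multiply presentation timestamps by the slow-down factor, e.g. 4.0 for 4x slow motion.
fun applySlowMotion(frames: List<TimedFrame>, slowDownFactor: Double): List<TimedFrame> =
    frames.map { it.copy(presentationUs = (it.presentationUs * slowDownFactor).toLong()) }

fun main() {
    // Frames captured at 120 fps are ~8333 us apart; after a 4x stretch they are ~33332 us apart,
    // so the clip plays back at an effective 30 frames per second.
    val captured = (0 until 120).map { TimedFrame(ByteArray(0), it * 8333L) }
    val slowed = applySlowMotion(captured, 4.0)
    println(slowed[1].presentationUs - slowed[0].presentationUs)   // 33332
}
```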
Further, the first target effect and the second target effect may include an effect identification, basic information of the effect, application duration information of the effect (for example, the first target effect is applied from the first time to the second time), and the like.
In the prior art, in an application scenario with a slow-motion special effect, the limited number of captured frames can cause the synthesized video to stutter during slow-motion playback. In this embodiment, that stutter is avoided by increasing the number of captured frames. Exemplarily, the highest capture rate achievable in the prior art is 30 frames per second; with 4x slow playback only 30/4 = 7.5 frames per second remain, which can cause the picture to stutter. In this embodiment the highest achievable capture rate is 120 frames per second, so with 4x slow playback the playback rate still reaches 30 frames per second, which improves the playback of the video picture and thereby the user experience.
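The comparison amounts to dividing the capture frame rate by the slow-down factor; written out, with the capture rates taken from the example above:

```latex
f_{\text{playback}} = \frac{f_{\text{capture}}}{k}
\qquad\Rightarrow\qquad
\frac{30}{4} = 7.5\ \text{fps (prior art)}
\quad\text{vs.}\quad
\frac{120}{4} = 30\ \text{fps (this embodiment)}
```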
In another embodiment, after performing encoding processing on a first target image obtained based on a video frame image to obtain encoded video data, the method may further include:
responding to the uploading and releasing touch operation, and uploading the coded video data to the network platform.
In this embodiment, after the encoded video data is obtained, it may be stored locally, or it may be uploaded to a network platform for other users to watch. Other users can also forward, like, or favorite it, which increases interactivity and further improves the user experience.
Specifically, the terminal device may further include an upload release button, and the method corresponding to this embodiment may further include:
and responding to a third touch operation acting on the uploading release key, and uploading the synthesized video data to the network platform.
In this embodiment, after the user finishes shooting the video, the synthesized video data can be uploaded to the network platform by touching the upload-and-publish key; the third touch operation may be a single-click, multi-click, or long-press operation. Further, the third touch operation is not tied to the first and second touch operations and may be the same as or different from either of them.
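As a purely illustrative sketch (UploadClient and the prompt callback are hypothetical, not part of the disclosure), the handler for the third touch operation only needs to hand the encoded video data to the platform and surface the success prompt:

```kotlin
interface UploadClient {
    fun upload(videoData: ByteArray): Boolean   // returns true when the platform accepted the video
}

class PublishController(private val client: UploadClient) {
    fun onPublishKeyTouched(encodedVideo: ByteArray, showPrompt: (String) -> Unit) {
        val ok = client.upload(encodedVideo)
        showPrompt(if (ok) "Work uploaded successfully" else "Upload failed, please try again")
    }
}
```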
Fig. 6 is a schematic view of an application of video uploading provided by an embodiment of the present disclosure. As shown in Fig. 6, the graphical user interface of the terminal device includes an upload-and-publish button; as shown in part a of Fig. 6, the user can click this button. As shown in part b of Fig. 6, after the user clicks the button, the interface shown after the synthesized video has been uploaded and published is displayed, with the text prompt "Work uploaded successfully" reminding the user that the upload succeeded.
In addition, Fig. 7 is a schematic diagram of the principle of the video processing method provided by an embodiment of the disclosure. As shown in Fig. 7, in this embodiment, after the capture device shoots a video frame image, the image may be stored in a first cache space. A first target image is then obtained based on the video frame image in the first cache space and stored in a second cache space, and the first target image in the second cache space is encoded to obtain encoded video data, which may then be published or uploaded to a network platform. In addition, if the video frame image is one used for preview, it is copied to obtain a second target image, which is stored in a third cache space; the first target special effect is superimposed onto the second target image, and the superimposed image is rendered and displayed as a preview video stream.
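Putting the pieces together, the following end-to-end Kotlin sketch mirrors the Fig. 7 data flow under the same simplifying assumptions as the earlier snippets (byte-array frames, stubbed effect and encoder functions): the capture thread fills the first cache, the encoding thread optionally composites the second target effect before encoding, and the preview thread overlays the first target effect on decimated copies taken from the third cache.

```kotlin
import java.util.concurrent.ArrayBlockingQueue
import kotlin.concurrent.thread

object Pipeline {
    private val firstCache = ArrayBlockingQueue<ByteArray>(64)   // captured frames
    private val thirdCache = ArrayBlockingQueue<ByteArray>(16)   // copies for preview

    fun run(totalFrames: Int, previewInterval: Int) {
        val capture = thread {
            repeat(totalFrames) { i ->
                val frame = ByteArray(16)
                firstCache.put(frame)
                if (i % previewInterval == 0) thirdCache.put(frame.copyOf())
            }
        }
        val encode = thread {
            repeat(totalFrames) {
                val firstTarget = firstCache.take()
                encodeFrame(applySecondTargetEffect(firstTarget))   // the result can then be published
            }
        }
        val preview = thread {
            repeat(totalFrames / previewInterval) {
                render(applyFirstTargetEffect(thirdCache.take()))
            }
        }
        capture.join(); encode.join(); preview.join()
    }

    private fun applyFirstTargetEffect(f: ByteArray) = f    // overlay sticker/filter/etc. (stub)
    private fun applySecondTargetEffect(f: ByteArray) = f   // optional effect before encoding (stub)
    private fun encodeFrame(f: ByteArray) { /* feed the encoder */ }
    private fun render(f: ByteArray) { /* draw the preview frame */ }
}

fun main() = Pipeline.run(totalFrames = 120, previewInterval = 4)
```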
Fig. 8 is a schematic structural diagram of a video processing apparatus according to an embodiment of the present disclosure, corresponding to the video processing method according to the foregoing embodiment. For ease of illustration, only portions that are relevant to embodiments of the present disclosure are shown. As shown in fig. 8, the apparatus may include:
an obtaining module 801, configured to obtain a video frame image in a captured video stream collected by a collection device.
In this embodiment, the video frame image is stored in a first buffer space, the first target image is stored in a second buffer space, and the second target image is stored in a third buffer space.
A processing module 802, configured to perform encoding processing on a first target image obtained based on the video frame image, so as to obtain encoded video data.
The processing module 802 is further configured to copy the video frame image to obtain a second target image if the video frame image is a video frame image for previewing; wherein the video frame image for previewing is obtained by performing frame extraction on the shooting video stream.
The processing module 802 is further configured to superimpose the first target special effect on the second target image, and render and display an image obtained after the superimposition as a preview video stream.
In another embodiment, the processing module 802 is further configured to:
and acquiring a second target special effect triggered by touch operation.
And superposing the second target special effect to the first target image to obtain a superposed image.
And coding the superposed image to obtain coded video data.
In this embodiment, the second target effect includes the first target effect and a slow motion effect,
the processing module 802 is further configured to:
and adding the first target special effect to the first target image to obtain an initial image after adding.
And carrying out slow motion special effect processing on the superposed initial image according to the slow motion special effect to obtain a superposed image.
In another embodiment, the processing module 802 is further configured to:
responding to an uploading release touch operation, and uploading the coded video data to a network platform.
The device provided in this embodiment may be used to implement the technical solution of the above method embodiment, and the implementation principle and technical effect are similar, which are not described herein again.
Referring to fig. 9, a schematic structural diagram of an electronic device 900 suitable for implementing an embodiment of the present disclosure is shown; the electronic device 900 may be a terminal device or a server. The terminal device may include, but is not limited to, mobile terminals such as a mobile phone, a notebook computer, a digital broadcast receiver, a personal digital assistant (PDA), a tablet computer (PAD), a portable multimedia player (PMP), and an in-vehicle terminal (e.g., a car navigation terminal), and fixed terminals such as a digital TV and a desktop computer. The electronic device shown in fig. 9 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in fig. 9, the electronic device 900 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 901, which may perform various suitable actions and processes according to a program stored in a Read Only Memory (ROM) 902 or a program loaded from a storage device 908 into a Random Access Memory (RAM) 903. In the RAM 903, various programs and data necessary for the operation of the electronic device 900 are also stored. The processing apparatus 901, the ROM 902, and the RAM 903 are connected to each other through a bus 904. An input/output (I/O) interface 905 is also connected to bus 904.
Generally, the following devices may be connected to the I/O interface 905: input devices 906 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 907 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 908 including, for example, magnetic tape, hard disk, etc.; and a communication device 909. The communication means 909 may allow the electronic apparatus 900 to communicate with other apparatuses wirelessly or by wire to exchange data. While fig. 9 illustrates an electronic device 900 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication device 909, or installed from the storage device 908, or installed from the ROM 902. The computer program, when executed by the processing device 901, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to perform the methods shown in the above embodiments.
Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including object oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. Where the name of a unit does not in some cases constitute a limitation of the unit itself, for example, the first retrieving unit may also be described as a "unit for retrieving at least two internet protocol addresses".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on a chip (SOCs), complex programmable logic devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In a first aspect, according to one or more embodiments of the present disclosure, there is provided a video processing method, including:
and acquiring a video frame image in the shooting video stream acquired by the acquisition equipment.
And coding the first target image obtained based on the video frame image to obtain coded video data.
If the video frame image is used for previewing, copying the video frame image to obtain a second target image; wherein the video frame image for previewing is obtained by performing frame extraction on the shot video stream.
And superposing the first target special effect to the second target image, and rendering and displaying an image obtained after superposition as a preview video stream.
According to one or more embodiments of the present disclosure, the video frame image is stored in a first buffer space, the first target image is stored in a second buffer space, and the second target image is stored in a third buffer space.
According to one or more embodiments of the present disclosure, before the encoding processing is performed on the first target image obtained based on the video frame image, to obtain encoded video data, the method further includes:
and acquiring a second target special effect triggered by touch operation.
Then, the encoding the first target image obtained based on the video frame image to obtain encoded video data includes:
and superposing the second target special effect to the first target image to obtain a superposed image.
And coding the superposed image to obtain coded video data.
In accordance with one or more embodiments of the present disclosure, the second target effect includes the first target effect and a slow motion effect,
the superimposing the second target special effect information corresponding to the second target special effect to the first target image to obtain a superimposed image, including:
and superposing the first target special effect to the first target image to obtain a superposed initial image.
And carrying out slow motion special effect processing on the superposed initial image according to the slow motion special effect to obtain a superposed image.
According to one or more embodiments of the present disclosure, after the encoding processing is performed on the first target image obtained based on the video frame image, obtaining encoded video data, the method further includes:
responding to an uploading release touch operation, and uploading the coded video data to a network platform.
In a second aspect, according to one or more embodiments of the present disclosure, there is provided a video processing apparatus including:
and the acquisition module is used for acquiring the video frame images in the shooting video stream acquired by the acquisition equipment.
And the processing module is used for coding a first target image obtained based on the video frame image to obtain coded video data.
The processing module is further configured to copy the video frame image to obtain a second target image if the video frame image is a video frame image for preview. Wherein the video frame image for previewing is obtained by performing frame extraction on the shooting video stream.
The processing module is further configured to superimpose the first target special effect on the second target image and render and display an image obtained after the superimposition as a preview video stream.
According to one or more embodiments of the present disclosure, the video frame image is stored in a first buffer space, the first target image is stored in a second buffer space, and the second target image is stored in a third buffer space.
According to one or more embodiments of the present disclosure, the processing module is further configured to:
and acquiring a second target special effect triggered by touch operation.
And superposing the second target special effect to the first target image to obtain a superposed image.
And coding the superposed image to obtain coded video data.
According to one or more embodiments of the present disclosure, the second target effect includes the first target effect and a slow motion effect, and the processing module is further configured to:
and superposing the first target special effect to the first target image to obtain a superposed initial image.
And carrying out slow motion special effect processing on the superposed initial image according to the slow motion special effect to obtain a superposed image.
According to one or more embodiments of the present disclosure, the processing module is further configured to:
responding to an uploading release touch operation, and uploading the coded video data to a network platform.
In a third aspect, according to one or more embodiments of the present disclosure, there is provided an electronic device including: at least one processor and memory;
the memory stores computer execution instructions;
the at least one processor executing the computer-executable instructions stored by the memory causes the at least one processor to perform the video processing method of the first aspect and its various possible implementations.
In a fourth aspect, according to one or more embodiments of the present disclosure, there is provided a computer-readable storage medium having stored therein computer-executable instructions that, when executed by a processor, implement the video processing method of the first aspect and its various possible implementations.
The foregoing description presents only the preferred embodiments of the disclosure and illustrates the technical principles employed. Those skilled in the art will appreciate that the scope of the disclosure is not limited to technical solutions formed by the particular combination of features described above, but also covers other technical solutions formed by any combination of those features or their equivalents without departing from the concept of the disclosure, for example, technical solutions in which the above features are interchanged with features disclosed in (but not limited to) this disclosure that have similar functions.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (8)

1. A video processing method, comprising:
acquiring a video frame image in a shooting video stream acquired by acquisition equipment, wherein the video frame image is stored in a first cache space;
coding a first target image obtained based on the video frame image to obtain coded video data, wherein the first target image is stored in a second cache space;
if the video frame image is used for previewing, copying the video frame image to obtain a second target image; the video frame image for previewing is obtained by performing frame extraction on the shooting video stream, and the second target image is stored in a third cache space;
and overlaying the first target special effect onto the second target image, and rendering and displaying the overlaid image as a preview video stream.
2. The method according to claim 1, further comprising, before said encoding the first target image obtained based on the video frame image to obtain encoded video data:
acquiring a second target special effect triggered by touch operation;
then, the encoding processing of the first target image obtained based on the video frame image to obtain encoded video data includes:
superposing the second target special effect to the first target image to obtain a superposed image;
and coding the superposed image to obtain coded video data.
3. The method of claim 2, wherein the second target effect includes the first target effect and a slow motion effect,
the superimposing the second target special effect information corresponding to the second target special effect to the first target image to obtain a superimposed image, including:
superposing the first target special effect to the first target image to obtain a superposed initial image;
and carrying out slow motion special effect processing on the superposed initial image according to the slow motion special effect to obtain a superposed image.
4. The method according to any one of claims 1 to 3, further comprising, after said encoding the first target image obtained based on the video frame image to obtain encoded video data:
responding to an uploading release touch operation, and uploading the coded video data to a network platform.
5. A video processing apparatus, comprising:
the acquisition module is used for acquiring video frame images in a shooting video stream acquired by acquisition equipment, and the video frame images are stored in a first cache space;
the processing module is used for coding a first target image obtained based on the video frame image to obtain coded video data, and the first target image is stored in a second cache space;
the processing module is further configured to copy the video frame image to obtain a second target image if the video frame image is a video frame image for preview; the video frame image for previewing is obtained by performing frame extraction on the shooting video stream, and the second target image is stored in a third cache space;
the processing module is further configured to superimpose the first target special effect on the second target image and render and display an image obtained after the superimposition as a preview video stream.
6. The apparatus of claim 5, wherein the processing module is further configured to:
acquiring a second target special effect triggered by touch operation;
superposing the second target special effect to the first target image to obtain a superposed image;
and coding the superposed image to obtain coded video data.
7. An electronic device, comprising: at least one processor and a memory;
the memory stores computer-executable instructions;
the at least one processor executing the computer-executable instructions stored by the memory causes the at least one processor to perform the video processing method of any of claims 1 to 4.
8. A computer-readable storage medium having computer-executable instructions stored therein, which when executed by a processor, implement the video processing method of any one of claims 1 to 4.
CN202011034296.0A 2020-09-27 2020-09-27 Video processing method, device and equipment Active CN112165632B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011034296.0A CN112165632B (en) 2020-09-27 2020-09-27 Video processing method, device and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011034296.0A CN112165632B (en) 2020-09-27 2020-09-27 Video processing method, device and equipment

Publications (2)

Publication Number Publication Date
CN112165632A CN112165632A (en) 2021-01-01
CN112165632B (en) 2022-10-04

Family

ID=73861329

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011034296.0A Active CN112165632B (en) 2020-09-27 2020-09-27 Video processing method, device and equipment

Country Status (1)

Country Link
CN (1) CN112165632B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113810640A (en) * 2021-08-12 2021-12-17 荣耀终端有限公司 Video processing method and device and electronic equipment
CN113760161A (en) * 2021-08-31 2021-12-07 北京市商汤科技开发有限公司 Data generation method, data generation device, image processing method, image processing device, equipment and storage medium
CN113747240B (en) * 2021-09-10 2023-04-07 荣耀终端有限公司 Video processing method, apparatus and storage medium
CN113938587B (en) * 2021-09-14 2024-03-15 青岛海信移动通信技术有限公司 Double-camera-based shooting method and electronic equipment
CN114125555B (en) * 2021-11-12 2024-02-09 深圳麦风科技有限公司 Editing data preview method, terminal and storage medium
CN114979495B (en) * 2022-06-28 2024-04-12 北京字跳网络技术有限公司 Method, apparatus, device and storage medium for content shooting
CN115767141A (en) * 2022-08-26 2023-03-07 维沃移动通信有限公司 Video playing method and device and electronic equipment
CN115643406A (en) * 2022-10-11 2023-01-24 腾讯科技(深圳)有限公司 Video decoding method, video encoding device, storage medium, and storage apparatus

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101926018B1 (en) * 2016-08-12 2018-12-06 라인 가부시키가이샤 Method and system for video recording
CN108111752A (en) * 2017-12-12 2018-06-01 北京达佳互联信息技术有限公司 video capture method, device and mobile terminal
CN108156520B (en) * 2017-12-29 2020-08-25 珠海市君天电子科技有限公司 Video playing method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN112165632A (en) 2021-01-01

Similar Documents

Publication Publication Date Title
CN112165632B (en) Video processing method, device and equipment
WO2022048478A1 (en) Multimedia data processing method, multimedia data generation method, and related device
CN112261226B (en) Horizontal screen interaction method and device, electronic equipment and storage medium
WO2021203996A1 (en) Video processing method and apparatus, and electronic device, and non-transitory computer readable storage medium
CN110475065B (en) Image processing method and device, electronic equipment and storage medium
US20220159197A1 (en) Image special effect processing method and apparatus, and electronic device and computer readable storage medium
CN112653920B (en) Video processing method, device, equipment and storage medium
CN113225483B (en) Image fusion method and device, electronic equipment and storage medium
CN112035046B (en) Method and device for displaying list information, electronic equipment and storage medium
US20240121349A1 (en) Video shooting method and apparatus, electronic device and storage medium
WO2023169305A1 (en) Special effect video generating method and apparatus, electronic device, and storage medium
CN112351222A (en) Image special effect processing method and device, electronic equipment and computer readable storage medium
CN110881104A (en) Photographing method, photographing device, storage medium and terminal
CN116934577A (en) Method, device, equipment and medium for generating style image
CN114584716A (en) Picture processing method, device, equipment and storage medium
US20230421857A1 (en) Video-based information displaying method and apparatus, device and medium
CN115002359A (en) Video processing method and device, electronic equipment and storage medium
CN115114463A (en) Media content display method and device, electronic equipment and storage medium
CN114860139A (en) Video playing method, video playing device, electronic equipment, storage medium and program product
CN115022696A (en) Video preview method and device, readable medium and electronic equipment
GB2600341A (en) Image special effect processing method and apparatus, electronic device and computer-readable storage medium
WO2022213798A1 (en) Image processing method and apparatus, and electronic device and storage medium
CN115474085B (en) Media content playing method, device, equipment and storage medium
WO2023185968A1 (en) Camera function page switching method and apparatus, electronic device, and storage medium
CN117676047A (en) Special effect processing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant