CN110913118B - Video processing method, device and storage medium - Google Patents


Info

Publication number
CN110913118B
CN110913118B (granted from application CN201811080449.8A)
Authority
CN
China
Prior art keywords
image frame
video
image
transparency
video stream
Prior art date
Legal status
Active
Application number
CN201811080449.8A
Other languages
Chinese (zh)
Other versions
CN110913118A (en)
Inventor
张伟
柯灵杰
夏胜飞
赵桢阳
黄庆然
杨宗
朱海波
陈通
Current Assignee
Tencent Cyber Tianjin Co Ltd
Original Assignee
Tencent Cyber Tianjin Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Cyber Tianjin Co Ltd filed Critical Tencent Cyber Tianjin Co Ltd
Priority to CN201811080449.8A
Publication of CN110913118A
Application granted
Publication of CN110913118B


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/63 Control of cameras or camera modules by using electronic viewfinders
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/433 Content storage operation, e.g. storage operation in response to a pause request, caching operations
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 Camera processing pipelines; Components thereof
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265 Mixing

Abstract

The application discloses a video processing method, which comprises the following steps: receiving video stream data acquired by a video acquisition device in real time, wherein the video stream data comprises a first image frame; caching the first image frame, and if a second image frame corresponding to the first image frame exists in the cache, synthesizing the first image frame and the second image frame in the cache to obtain a synthesized image frame; providing the composite image frame to a presentation device for presentation of the composite image frame in a video preview interface. The application also provides a corresponding device and a storage medium.

Description

Video processing method, device and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a video processing method, an apparatus, and a storage medium.
Background
Video content increasingly pursues a distinctive look, and a wide variety of striking special effects have emerged. The special effects commonly used during video shooting only apply image processing to the currently captured picture, such as adding a filter or a face pendant. As for the afterimage (ghosting) special effect, it is currently produced only after shooting finishes, by processing the video frames during secondary editing; it cannot be previewed in real time while the video is being shot. The afterimage effect means that, within one frame of picture, the object's image from the current frame is visible together with gradually fading images of the object from earlier frames.
Disclosure of Invention
In order to solve the above technical problems, embodiments of the present application provide a video processing method, an apparatus, and a storage medium, so as to display a special effect of an afterimage in real time during video shooting.
The embodiment of the application provides a video processing method, which comprises the following steps:
receiving video stream data acquired by a video acquisition device in real time, wherein the video stream data comprises a first image frame;
the first image frame is buffered and the second image frame is buffered,
if a second image frame corresponding to the first image frame exists in the cache, the first image frame and the second image frame in the cache are synthesized to obtain a synthesized image frame;
providing the composite image frame to a presentation device for presentation of the composite image frame in a video preview interface.
An embodiment of the present application further provides a video processing apparatus, where the apparatus includes:
the receiving unit is used for receiving video stream data acquired by a video acquisition device in real time, wherein the video stream data comprises a first image frame;
a buffer unit for buffering the first image frame,
the combining unit is used for combining the first image frame and the second image frame in the cache to obtain a combined image frame if the second image frame corresponding to the first image frame exists in the cache;
and the sending unit is used for providing the composite image frame to a display device so as to display the composite image frame in a video preview interface.
Embodiments of the present application also provide a storage medium storing computer-readable instructions, which can cause at least one processor to execute the method as described above.
With the scheme provided by the application, each captured frame image of the video stream is stored in the cache and, at the same time, combined with previously captured images from the cache and previewed in real time, so that video pictures carrying the afterimage effect can be displayed in real time during video capture.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a diagram of a system architecture to which embodiments of the present application relate;
FIG. 2 is a schematic flow chart diagram of a video processing method in some embodiments of the present application;
FIG. 3 is a schematic flow chart diagram of a video processing method in some embodiments of the present application;
FIG. 4A is a video capture interface diagram in some embodiments of the present application;
FIG. 4B is a video preview interface diagram in some embodiments of the present application;
FIG. 5 is a schematic diagram of a video processing apparatus according to some embodiments of the present application; and
fig. 6 is a schematic diagram of a computing device composition structure in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The embodiments of the application provide a video processing method, a video processing apparatus, and a storage medium. FIG. 1 is a system architecture diagram according to an embodiment of the application. As shown in FIG. 1, the system architecture includes terminal devices 104 (e.g., terminal devices 104a-c). Each user shoots video by running a client application 108 (e.g., client applications 108a-c) on a terminal device 104. The client application receives, through an interface, the video stream captured by a video capture apparatus, performs real-time afterimage processing and previewing on each image frame in the stream, and displays the processed images by sending them to a presentation apparatus, for example the display screen of the terminal device 104. The afterimage effect can therefore be viewed while the video is being captured, which makes it convenient for the user to record video with reference to that effect. The client application 108 may be a video application, such as a live-streaming client, a Meipai client, or a Douyin (TikTok) client. In some embodiments, the system architecture further includes the server 102. After shooting is completed, the client application 108 may send the captured video carrying the afterimage effect to the server 102, and the server 102 forwards it to other client applications 108 so that their users can view the video carrying the afterimage effect. For example, the server 102 may send a video stream carrying the afterimage effect to the client application 108 of a live-stream viewer.
The server 102 provides a video service; for example, the server 102 is the backend server of the Douyin application or of the Meipai application. Terminal devices 104 (e.g., terminal devices 104a-c) are connected to the server 102 via one or more networks 106.
Examples of terminal device 104 include, but are not limited to, a palmtop computer, a wearable computing device, a Personal Digital Assistant (PDA), a tablet computer, a laptop computer, a desktop computer, a mobile phone, a smartphone, an Enhanced General Packet Radio Service (EGPRS) mobile phone, a media player, a navigation device, a game console, a television, or a combination of any two or more of these or other data processing devices.
Examples of the one or more networks 106 include a Local Area Network (LAN) and a Wide Area Network (WAN) such as the internet. In some embodiments, one or more of the networks 106 may be implemented using any well-known network protocol, including various wired or wireless protocols, such as Ethernet, Universal Serial Bus (USB), FIREWIRE, Global System for Mobile communications (GSM), Enhanced Data GSM Environment (EDGE), Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Bluetooth, WiFi, Voice over IP (VoIP), Wi-MAX, or any other suitable communication protocol.
Each terminal device 104 includes one or more internal peripheral modules or may be connected to one or more peripheral devices (e.g., navigation systems, health monitors, climate controllers, smart sports equipment, bluetooth headsets, smart watches, etc.) via wires or wirelessly.
An embodiment of the present application provides a video processing method, as shown in fig. 2, the method includes:
s201: receiving video stream data acquired by a video acquisition device in real time, wherein the video stream data comprises a first image frame.
S202: the first image frame is buffered.
In some embodiments, prior to buffering the first image frame, the method further comprises: reducing a resolution of the first image frame.
In some embodiments, prior to buffering the first image frame: carrying out compression coding on the first image frame to obtain a coded first image frame;
the buffering the first image frame comprises: and storing the encoded first image frame in the buffer.
In some embodiments, the method further comprises:
and when the time difference between the acquisition time of the image frame in the cache and the current time reaches a preset time difference value, deleting the image frame from the cache.
S203: and if a second image frame corresponding to the first image frame exists in the cache, the first image frame and the second image frame in the cache are synthesized to obtain a synthesized image frame.
In some embodiments, the second image frame in the buffer is an encoded second image frame, and before the combining the first image frame with the second image frame in the buffer, the method further comprises:
and decoding the encoded second image frame to obtain the second image frame.
In some embodiments, said combining said first image frame with said second image frame in said buffer comprises:
determining a transparency of the second image frame;
and synthesizing the first image frame and the second image frame according to the transparency of the second image frame.
In some embodiments, said determining the transparency of the second image frame comprises:
acquiring the acquisition time of the second image frame;
and determining the transparency of the second image frame according to the acquisition time of the second image frame.
In some embodiments, the second image frame is a plurality of frames, and the combining the first image frame and the second image frame according to the transparency of the second image frame includes:
sorting the plurality of second image frames according to respective acquisition times;
and according to the transparency of each second image frame, sequentially superposing each second image frame onto the first image frame in the sorted order.
In some embodiments, before the combining the first image frame with the second image frame in the buffer, the method further comprises:
and performing blurring processing on the second image frame in the buffer.
S204: providing the composite image frame to a presentation device for presentation of the composite image frame in a video preview interface.
In some embodiments, the method further comprises: when a second image frame corresponding to the first image frame does not exist in the cache, providing the first image frame to the display device so as to display the first image frame in a video preview interface.
With the scheme provided by the application, each captured frame image of the video stream is stored in the cache and, at the same time, combined with previously captured images from the cache and previewed in real time, so that video pictures carrying the afterimage effect can be displayed in real time during video capture.
Another embodiment of the present application provides a video processing method, as shown in fig. 3, including:
s301: receiving video stream data acquired by a video acquisition device in real time, wherein the video stream data comprises a first image frame.
The video capture apparatus may be a camera, either one integrated in the terminal device 104 or an external one. A video stream is captured through the camera; it comprises a plurality of image frames ordered by capture time, and the first image frame is the currently captured frame. The client application 108 obtains the video stream captured by the video capture apparatus through the interface.
As shown in FIG. 4A, in the video capture interface 40, when the user taps control 41, a template selection box 44 is displayed, in which a plurality of templates 43 are shown. The templates are video shooting templates, and some of them carry the afterimage effect. For example, template 2 carries the afterimage effect; when the user selects template 2 and taps control 42, a video with the afterimage effect is shot.
Steps S302 to S303 below are both performed before the first image frame is stored in the cache, so that the currently captured first image frame is processed before caching in order to save cache space.
S302: reducing a resolution of the first image frame.
The resolution of the image is reduced to shrink the space it occupies in the cache. For example, the image is divided into small squares, and the color values of the pixels within each square are averaged and used as the color value of that square, thereby lowering the image resolution.
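The block-averaging idea above can be sketched as follows; the name `downscale`, its parameters, and the flat list-of-tuples pixel layout are illustrative choices, not part of the embodiment:

```python
def downscale(pixels, width, height, block):
    """Reduce resolution by averaging each `block` x `block` square of pixels.

    `pixels` is a row-major list of (R, G, B) tuples; for this sketch,
    width and height are assumed to be divisible by `block`.
    """
    out_w, out_h = width // block, height // block
    out = []
    for by in range(out_h):
        for bx in range(out_w):
            r = g = b = 0
            # accumulate the colour values inside one square
            for dy in range(block):
                for dx in range(block):
                    pr, pg, pb = pixels[(by * block + dy) * width + (bx * block + dx)]
                    r += pr
                    g += pg
                    b += pb
            n = block * block
            out.append((r // n, g // n, b // n))  # mean colour of the square
    return out, out_w, out_h
```

Halving both dimensions (block = 2) cuts the cached data to a quarter of its original size, which is the point of performing this step before S304.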
S303: and carrying out compression coding on the first image frame to obtain a coded first image frame.
Compression-encoding the first image reduces the storage space it occupies, so that after the compression-encoded first image frame is stored in the cache, the cache takes up less memory. The raw data of an image frame in the captured video stream is a set of RGBA values, one per pixel. Rather than storing the data of every pixel, the raw data is compression-encoded before being written to the cache, removing redundant data, and the compressed data is stored. Compression algorithms may include JPEG lossy compression, PNG lossless compression, and the like.
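As a minimal stand-in for the PNG-style lossless compression named above, the raw RGBA bytes can be compressed before caching and decompressed on retrieval. The function names are hypothetical, and zlib is used here only because it is a readily available lossless codec, not because the embodiment specifies it:

```python
import zlib


def encode_frame(rgba_bytes):
    # Losslessly compress the raw RGBA data before it enters the cache.
    return zlib.compress(rgba_bytes)


def decode_frame(encoded):
    # Restore the raw RGBA data when the frame is read back for combining.
    return zlib.decompress(encoded)
```

Camera frames contain large runs of similar values, so even this generic codec removes substantial redundancy before the data reaches the cache.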
S304: and storing the encoded first image frame in the buffer.
The cache is a region of storage space in memory used to store image frames of the video stream being shot. The number of image frames kept is set as needed, typically 1-3 seconds' worth.
S305: and if a second image frame corresponding to the first image frame exists in the cache, the first image frame and the second image frame in the cache are synthesized to obtain a synthesized image frame.
In some embodiments, the step S305 may include the following operations:
S3050: searching a second image frame corresponding to the first image frame in the cache.
The combining process is performed for each captured image frame. For the image frame currently captured by the video capture device, previously captured second image frames are looked up in the cache, and the first image frame is combined with the retrieved second image frames to obtain a composite image frame. The composite image frame is displayed during the real-time preview, so the previewed picture carries the afterimage effect and shooting can be adjusted with reference to it, for example by changing the shooting angle and position.
In some embodiments, a number N may be set, N being a natural number. When the number of second image frames in the cache is less than or equal to N, all second image frames in the cache are obtained and combined. When the number of second image frames in the cache is greater than N, N second image frames are selected from the cache for combining. In this case the second image frames may be selected at random, at a fixed time interval, or at multiple intervals determined by a time function. For example, once the number of image frames stored in the cache reaches a certain count, each combining pass looks up a group of frames from the cache, such as those of the most recent second, and takes one frame every 0.25 seconds, so that 3 frames in total are read. These 3 image frames are then combined with the current image frame to obtain the composite image frame.
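The fixed-interval selection described above might look like the following sketch; the buffer layout of (capture_time, frame) pairs and the function name are assumptions made for illustration:

```python
def select_frames(buffer, now, n, interval):
    """Pick up to `n` previously captured frames from `buffer` (a list of
    (capture_time, frame) pairs, oldest first), one per `interval` seconds
    counting back from `now`."""
    chosen = []
    for k in range(1, n + 1):
        target = now - k * interval
        # take the cached frame whose capture time is closest to the target instant
        best = min(buffer, key=lambda entry: abs(entry[0] - target), default=None)
        if best is not None and best not in chosen:
            chosen.append(best)
    return chosen
```

With `now = 1.0`, `n = 3`, and `interval = 0.25`, this reads one frame near each of 0.75 s, 0.5 s, and 0.25 s, matching the "3 frames over the latest second" example.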
In some embodiments, if the second image frame in the buffer is an encoded image frame, a decoding operation is required before the combining process.
S3051: and decoding the encoded second image frame to obtain the second image frame.
When the image frames stored in the cache are compression-encoded, a second image frame obtained from the cache is decoded before the first image frame and the second image frame are combined.
S3052: and performing blurring processing on the second image frame in the buffer.
Before the first image frame and the second image frame are combined, the second image frame may be blurred, so that the afterimage in the composite frame has a blurred look and the afterimage effect is improved. During blurring, a template may be set around each target pixel of the image frame, consisting of its surrounding neighboring pixels, and the average value of all pixels in the template replaces the value of the target pixel. The radius of the template may be configured; it determines the size of the pixel range the template covers. The different second image frames can all be blurred with the same strength, or the blurring strength of each second image frame can be determined from its capture time, a different strength corresponding to a different template radius.
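The template-averaging described above amounts to a box blur; the sketch below applies it to a 2-D grid of grayscale values. The name `box_blur` and the border handling (clipping the template at the image edges) are illustrative choices:

```python
def box_blur(img, radius):
    """Replace each pixel with the mean of its template: the
    (2*radius + 1) x (2*radius + 1) neighbourhood, clipped at the borders."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total = count = 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:  # skip positions outside the image
                        total += img[ny][nx]
                        count += 1
            out[y][x] = total / count  # mean of the template
    return out
```

Passing a larger `radius` corresponds to the stronger blurring strengths the text mentions for older frames.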
S3053: determining a transparency of the second image frame.
In the video processing method, the transparency of the second image frames is set so that the current first image frame remains clearly visible while the second image frames appear as faint, gradually fading afterimages. Determining the transparency of the second image frame comprises the following steps:
S3053A: and acquiring the acquisition time of the second image frame.
S3053B: and determining the transparency of the second image frame according to the acquisition time of the second image frame.
When a captured image frame is stored in the cache, its capture time can be stored along with it. Image frames with an earlier capture time are given a higher transparency, so that frames captured longer ago appear fainter in the composite image frame, producing the effect of the afterimage gradually fading away.
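One way to map capture time to transparency, consistent with older frames appearing fainter, is a linear ramp over the cache retention window. The function name, the linear shape, and the default window are assumptions, since the embodiment does not fix a specific mapping:

```python
def transparency(capture_time, now, window=1.0):
    """Map a frame's age to a transparency x in [0, 1]: the older the frame,
    the closer x is to 1 (fully transparent), so older afterimages are fainter.
    `window` is the assumed cache retention span in seconds."""
    age = now - capture_time
    # clamp so frames older than the window are fully transparent
    return max(0.0, min(1.0, age / window))
```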
S3054: and synthesizing the first image frame and the second image frame according to the transparency of the second image frame.
When the first image frame and the second image frame are combined, let the transparency of the second image frame be x, with x ∈ [0,1]; when x is 0, the second image frame is opaque, and when x is 1, the second image frame is completely transparent. The RGB value of each pixel in the composite image frame is obtained by the following formula (1):
RGB value of the composite image frame = RGB value of the second image frame × (1 - x) + RGB value of the first image frame × x. (1)
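A per-pixel sketch of the blend, following the stated transparency convention: x = 0 leaves the second (older) frame opaque and x = 1 hides it entirely, so the second frame must carry weight (1 - x). The function name is illustrative:

```python
def blend(first_rgb, second_rgb, x):
    """Mix one pixel of the current frame with one pixel of an older frame;
    x is the transparency of the second (older) frame."""
    return tuple(
        round((1 - x) * s + x * f)  # weight (1 - x) on the older frame
        for f, s in zip(first_rgb, second_rgb)
    )
```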
When there are multiple second image frames, step S3054 includes steps S3054A-S3054B.
S3054A: and sequencing the plurality of second image frames according to the respective acquisition time.
S3054B: and according to the transparency of each second image frame, sequentially superposing each second image frame to the first image frame according to the sequence.
The later the capture time, the earlier a second image frame is placed in the ordering, and the second image frames are superposed onto the first image frame in that order. For example, suppose the current image frame is M and four second image frames, A, B, C, and D in order, are obtained from the cache. During combining, A and M are combined to produce M1, B and M1 to produce M2, C and M2 to produce M3, and D and M3 to produce M4, which is displayed as the composite image frame. Superposing the second image frames onto the first image frame in this order gives the composite frame a trailing-ghost effect. In the video processing method, the currently captured first image frame is first stored in the in-memory cache so that it can be used when afterimage combining is performed for subsequently captured frames. For the first image frame, one or more second image frames are obtained from the cache and combined with it to obtain a composite image frame, which is sent through the interface to the presentation device for display and is also stored to the hard disk.
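The A, B, C, D example above amounts to a simple fold over the buffered frames. The names and the (frame, transparency) pair layout are illustrative, and the usage in the test treats a "frame" as a single scalar pixel to keep the sketch short:

```python
def composite(first, seconds_with_x, blend):
    """Overlay each buffered second frame onto the current frame in order,
    matching the example: A + M -> M1, B + M1 -> M2, and so on.
    `seconds_with_x` is a list of (frame, transparency) pairs in overlay
    order; `blend(current, older, x)` mixes two frames."""
    result = first
    for frame, x in seconds_with_x:
        result = blend(result, frame, x)  # the running result plays the "first frame" role
    return result
```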
S306: providing the composite image frame to a presentation device for presentation of the composite image frame in a video preview interface.
The composite image frame is provided to a presentation device for display; the client application 108 may provide it through the interface. The presentation device may be a display of the terminal device 104, for example the screen of a smart terminal, or an external display connected to the terminal device. The displayed composite image frame carries the afterimage effect; for example, when an apple is present in the picture, its afterimage in the composite frame appears as afterimage 45 in FIG. 4B.
S307: and when the time difference between the acquisition time of the image frame in the cache and the current time reaches a preset time difference value, deleting the image frame from the cache.
The cache stores a predetermined span of image frames, for example one second's worth. When the capture time of an image frame in the cache is more than one second before the current time, the frame is deleted, saving cache space.
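The time-based eviction can be sketched as filtering the cache by frame age; the (capture_time, frame) entry layout and the names are assumptions:

```python
def evict_expired(buffer, now, max_age=1.0):
    """Drop every cached (capture_time, frame) entry whose age has reached
    the preset threshold (one second by default, per the example above)."""
    return [(t, f) for (t, f) in buffer if now - t < max_age]
```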
S308: when a second image frame corresponding to the first image frame does not exist in the cache, providing the first image frame to the display device so as to display the first image frame in a video preview interface.
When no image frames are stored in the cache, for example because the first image frame is the very first frame of the captured video stream, the first image frame is sent directly to the presentation device for display.
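Together with S305, this passthrough behavior is a simple dispatch: composite when the cache holds earlier frames, otherwise show the current frame directly. The names here are illustrative:

```python
def preview_frame(first, buffer, compose):
    """Return the frame to display in the preview interface: the composite
    when earlier frames exist in the cache, otherwise (e.g. for the very
    first captured frame) the current frame as-is."""
    if not buffer:
        return first
    return compose(first, buffer)
```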
With the video processing method provided by this embodiment, a composite image frame carrying the afterimage effect is previewed while the video is being shot, and the previewed composite frame guides the shooting. The image frames stored in the cache are resolution-reduced and compression-encoded to cut the memory the cache occupies, and cached frames whose capture time exceeds the preset limit are deleted, reducing it further. During combining, the transparency and ordering of each second image frame are set according to its capture time in the cache, so that frames captured longer ago appear fainter in the composite image and a trailing-ghost effect is produced. Further, the second image frames are blurred before combining to improve the afterimage effect.
An embodiment of the present application further provides a video processing apparatus 500, as shown in fig. 5, the apparatus includes:
a receiving unit 501, configured to receive video stream data acquired by a video acquisition device in real time, where the video stream data includes a first image frame;
a buffer unit 502 for buffering the first image frame,
a combining unit 503, configured to, if a second image frame corresponding to the first image frame exists in the buffer, perform combining processing on the first image frame and the second image frame in the buffer to obtain a combined image frame;
a sending unit 504, configured to provide the composite image frame to a display device for displaying the composite image frame in a video preview interface.
In some examples, the apparatus 500 further includes a processing unit 505 configured to: reduce the resolution of the first image frame before it is buffered.
In some examples, the processing unit 505 is further configured to: prior to buffering the first image frame: and carrying out compression coding on the first image frame to obtain a coded first image frame.
A buffer unit 502, configured to store the encoded first image frame in the buffer;
wherein the second image frame in the buffer is an encoded second image frame, and the combining unit 503 is configured to:
and before the first image frame and the second image frame in the cache are subjected to synthesis processing, decoding the encoded second image frame to obtain the second image frame.
In some examples, the apparatus 500 further includes a deletion unit 506 configured to: delete an image frame from the cache when the time difference between its capture time and the current time reaches a preset value.
In some examples, the sending unit 504 is configured to: when a second image frame corresponding to the first image frame does not exist in the cache, providing the first image frame to the display device so as to display the first image frame in a video preview interface.
In some examples, the combining unit 503 is configured to: determine a transparency of the second image frame;
and synthesizing the first image frame and the second image frame according to the transparency of the second image frame.
In some examples, the combining unit 503 is configured to: acquire the capture time of the second image frame; and determine the transparency of the second image frame according to that capture time.
In some examples, when there are multiple second image frames, the combining unit 503 is configured to: sort the plurality of second image frames according to their respective capture times; and, according to the transparency of each second image frame, sequentially superpose each second image frame onto the first image frame in the sorted order.
In some examples, the processing unit 505 is further configured to perform blurring processing on the second image frame in the buffer.
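The blurring step can be illustrated with a mean (box) filter, consistent with the template-and-radius description in claim 1: each target pixel is replaced by the average of the pixels the template covers. A 1-D row of pixels is used here for brevity; extending this to 2-D, and deriving the radius from the frame's age, are left out as the patent describes them.

```python
# Box blur sketch: the "template" is a window of the given radius around
# each target pixel, and the mean of the window replaces the pixel. A
# larger radius (e.g. for older frames) blurs more strongly.

def box_blur_1d(pixels, radius):
    n = len(pixels)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        window = pixels[lo:hi]                  # template around pixel i
        out.append(sum(window) / len(window))   # mean replaces the value
    return out

blurred = box_blur_1d([0, 0, 90, 0, 0], radius=1)
```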
Embodiments of the present application also provide a computer-readable storage medium storing computer-readable instructions that, when executed, cause at least one processor to perform the methods described above.
Fig. 6 shows the composition of a computing device in which the video processing apparatus resides. As shown in Fig. 6, the computing device includes one or more processors (CPUs) 602, a communication module 604, a memory 606, a user interface 610, and a communication bus 608 interconnecting these components.
The processor 602 may receive and transmit data via the communication module 604 to enable network communications and/or local communications.
The user interface 610 includes one or more output devices 612, including one or more speakers and/or one or more visual displays. The user interface 610 also includes one or more input devices 614, including, for example, a keyboard, a mouse, a voice command input unit or microphone, a touch screen display, a touch sensitive tablet, a gesture capture camera or other input buttons or controls, and the like.
Memory 606 may be high-speed random access memory, such as DRAM, SRAM, DDR RAM, or other random access solid state memory devices; or non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid-state storage devices.
The memory 606 stores a set of instructions executable by the processor 602, including:
an operating system 616, including programs for handling various basic system services and for performing hardware related tasks;
the applications 618, which include various application programs that can implement the processing flows in the above examples and may include, for example, the apparatus 500 shown in Fig. 5. In some examples, the apparatus 500 may include some or all of the modules 501 to 506 shown in Fig. 5, and each of the modules 501 to 506 may store machine-executable instructions. The processor 602 implements the functions of the modules 501 to 506 by executing those machine-executable instructions in the memory 606.
It should be noted that not all of the steps and modules in the above flows and structures are necessary; some steps or modules may be omitted according to actual needs, and the execution order of the steps is not fixed and may be adjusted as required. The division into modules is purely functional and adopted for ease of description: in an actual implementation, one module may be realized as several modules, the functions of several modules may be realized by a single module, and these modules may reside in the same device or in different devices.
The hardware modules in the embodiments may be implemented in hardware or a hardware platform plus software. The software includes machine-readable instructions stored on a non-volatile storage medium. Thus, embodiments may also be embodied as software products.
In various examples, the hardware may be implemented by dedicated hardware or by hardware executing machine-readable instructions. For example, the hardware may be specially designed permanent circuits or logic devices (e.g., special-purpose processors such as FPGAs or ASICs) that perform the specified operations, or programmable logic devices or circuits temporarily configured by software (e.g., a general-purpose processor or other programmable processor) to perform certain operations.
In addition, each example of the present application may be implemented as a data processing program executed by a data processing device such as a computer; such a data processing program itself constitutes the present application. The program is typically stored on a storage medium and executed either by reading it directly from the medium or by installing or copying it onto a storage device (such as a hard disk and/or memory) of the data processing device. Such a storage medium therefore also constitutes the present application, which further provides a non-volatile storage medium storing a data processing program that can be used to carry out any of the above method examples.
The non-volatile computer-readable storage medium may be a memory provided on an expansion board inserted into the computer, or a memory provided in an expansion unit connected to the computer, to which the program is written. A CPU or the like mounted on the expansion board or expansion unit may then perform part or all of the actual operations according to the instructions.
The non-volatile computer-readable storage medium includes floppy disks, hard disks, magneto-optical disks, optical disks (e.g., CD-ROM, CD-R, CD-RW, DVD-ROM, DVD-RAM, DVD-RW, DVD+RW), magnetic tapes, non-volatile memory cards, and ROM. Alternatively, the program code may be downloaded from a server computer via a communication network.
The above description covers only preferred embodiments of the present invention and is not intended to limit it; any modifications, equivalent substitutions, improvements and the like made within the spirit and principles of the present invention shall fall within its scope of protection.

Claims (12)

1. A video processing method, applied to a live video broadcast client, the method comprising the following steps:
receiving video stream data acquired by a video acquisition device in real time, wherein the video stream data comprises a first image frame;
caching the first image frame, and, if a second image frame corresponding to the first image frame exists in the cache, performing blurring processing on the second image frame, wherein the blurring processing specifically comprises the following steps:
setting a template for a target pixel on the second image frame;
determining the blurring strength of the second image frame according to the acquisition time of the second image frame;
determining the radius of the template according to the blurring strength, wherein the radius determines the size of the pixel range covered by the template, and the average value of all pixels in the template replaces the value of the target pixel;
determining the transparency of the second image frame, and combining the first image frame and the blurred second image frame according to the transparency of the second image frame to obtain a composite image frame;
providing the composite image frame to a display device to display the composite image frame in a video preview interface, so that the angle and position at which the video acquisition device shoots the video can be adjusted according to the previewed image carrying the afterimage effect;
after video shooting ends, sending the shot video stream carrying the afterimage effect to a server, so that the server sends the video stream carrying the afterimage effect to the client corresponding to a live broadcast viewer.
2. The method of claim 1, wherein prior to buffering the first image frame, the method further comprises:
reducing a resolution of the first image frame.
3. The method of claim 1, wherein prior to buffering the first image frame, the method further comprises:
carrying out compression coding on the first image frame to obtain a coded first image frame;
the buffering the first image frame comprises:
storing the encoded first image frame in a buffer memory;
wherein the second image frame in the buffer is the encoded second image frame, and before the blurring process is performed on the second image frame, the method further includes:
decoding the encoded second image frame to obtain the second image frame.
4. The method of claim 1, further comprising:
deleting an image frame from the cache when the time difference between the acquisition time of the image frame and the current time reaches a preset time-difference value.
5. The method of claim 1, further comprising:
when a second image frame corresponding to the first image frame does not exist in the buffer memory, providing the first image frame to the display device so as to display the first image frame in a video preview interface.
6. The method of claim 1, wherein receiving the video stream data collected by the video collection device in real-time comprises:
receiving video stream data acquired by a video acquisition device in real time through an interface;
said providing the composite image frame to a presentation device comprises:
sending the composite image frame to the display device through an interface for display.
7. The method of claim 1, wherein the determining the transparency of the second image frame comprises:
acquiring the acquisition time of the second image frame;
and determining the transparency of the second image frame according to the acquisition time of the second image frame.
8. The method according to claim 1, wherein there are a plurality of second image frames, and the combining the first image frame and the blurred second image frame according to the transparency of the second image frame comprises:
sorting the plurality of second image frames according to their respective acquisition times;
and, in that order, sequentially superposing each blurred second image frame onto the first image frame according to its transparency.
9. The method of claim 1, wherein when providing the composite image frame to a presentation device, the method further comprises:
storing the composite image frame to a hard disk.
10. A video processing apparatus, comprising:
the receiving unit is used for receiving video stream data acquired by a video acquisition device in real time, wherein the video stream data comprises a first image frame;
the buffer unit is used for buffering the first image frame;
a synthesizing unit, configured to, if a second image frame corresponding to the first image frame exists in the buffer, perform blurring processing on the second image frame, the blurring processing specifically comprising: setting a template for a target pixel on the second image frame; determining the blurring strength of the second image frame according to the acquisition time of the second image frame; determining the radius of the template according to the blurring strength, wherein the radius determines the size of the pixel range covered by the template, and the average value of all pixels in the template replaces the value of the target pixel; and determining the transparency of the second image frame, and combining the first image frame and the blurred second image frame according to the transparency of the second image frame to obtain a composite image frame;
the sending unit is used for providing the composite image frame for a display device so as to display the composite image frame in a video preview interface, and adjusting the angle and the position of a video shot by the video acquisition device according to the previewed image carrying the afterimage effect; after video shooting is finished, sending the shot video stream carrying the afterimage effect to a server, so that the server sends the video stream carrying the afterimage effect to a client corresponding to a live broadcast viewer.
11. A computer-readable storage medium storing computer-readable instructions for causing at least one processor to perform the method of any one of claims 1-9.
12. A computing device comprising a memory and a processor, the memory having stored therein computer-readable instructions that, when executed by the processor, implement the method of any of claims 1 to 9.
CN201811080449.8A 2018-09-17 2018-09-17 Video processing method, device and storage medium Active CN110913118B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811080449.8A CN110913118B (en) 2018-09-17 2018-09-17 Video processing method, device and storage medium


Publications (2)

Publication Number Publication Date
CN110913118A CN110913118A (en) 2020-03-24
CN110913118B true CN110913118B (en) 2021-12-17

Family

ID=69812798


Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112351221B (en) * 2019-08-09 2024-02-13 北京字节跳动网络技术有限公司 Image special effect processing method, device, electronic equipment and computer readable storage medium
CN112999657B (en) * 2021-03-09 2022-11-25 腾讯科技(深圳)有限公司 Method, device, equipment and medium for displaying phantom of virtual character
CN116055802B (en) * 2022-07-21 2024-03-08 荣耀终端有限公司 Image frame processing method and electronic equipment

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103905730A (en) * 2014-03-24 2014-07-02 深圳市中兴移动通信有限公司 Shooting method of mobile terminal and mobile terminal
CN104079833A (en) * 2014-07-02 2014-10-01 深圳市中兴移动通信有限公司 Method and device for shooting star orbit videos
CN104104873A (en) * 2014-07-16 2014-10-15 深圳市中兴移动通信有限公司 Orbit shooting method, shooting method of object motion trails and mobile terminal
CN104104798A (en) * 2014-07-23 2014-10-15 深圳市中兴移动通信有限公司 Method for shooting light painting video and mobile terminal
CN104125407A (en) * 2014-08-13 2014-10-29 深圳市中兴移动通信有限公司 Object motion track shooting method and mobile terminal
CN104159033A (en) * 2014-08-21 2014-11-19 深圳市中兴移动通信有限公司 Method and device of optimizing shooting effect
CN104766361A (en) * 2015-04-29 2015-07-08 腾讯科技(深圳)有限公司 Ghosting effect realization method and device
CN107077720A (en) * 2016-12-27 2017-08-18 深圳市大疆创新科技有限公司 Method, device and the equipment of image procossing
CN108282612A (en) * 2018-01-12 2018-07-13 广州市百果园信息技术有限公司 Method for processing video frequency and computer storage media, terminal

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101325051A (en) * 2007-06-15 2008-12-17 先锋高科技(上海)有限公司 Image display device and image display method
CN103716537B (en) * 2013-12-18 2017-03-15 宇龙计算机通信科技(深圳)有限公司 Picture synthesis method and terminal
CN104618572B (en) * 2014-12-19 2018-02-16 广东欧珀移动通信有限公司 The photographic method and device of a kind of terminal
WO2016173794A1 (en) * 2015-04-30 2016-11-03 Fotonation Limited A method and apparatus for producing a video stream
CN108156501A (en) * 2017-12-29 2018-06-12 北京安云世纪科技有限公司 For to video data into Mobile state synthetic method, system and mobile terminal




Legal Events

Date Code Title Description
PB01 Publication
REG Reference to a national code (Ref country code: HK; Ref legal event code: DE; Ref document number: 40023026; Country of ref document: HK)
SE01 Entry into force of request for substantive examination
GR01 Patent grant