CN114827721A - Video special effect processing method and device, storage medium and electronic equipment - Google Patents

Video special effect processing method and device, storage medium and electronic equipment

Info

Publication number
CN114827721A
Authority
CN
China
Prior art keywords
image data
image
rendering
processed
thread
Prior art date
Legal status
Pending
Application number
CN202210312474.4A
Other languages
Chinese (zh)
Inventor
张建荣
Current Assignee
Beijing Cut Stone Hi Tech Co ltd
Original Assignee
Beijing Cut Stone Hi Tech Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Cut Stone Hi Tech Co ltd filed Critical Beijing Cut Stone Hi Tech Co ltd
Priority to CN202210312474.4A
Publication of CN114827721A

Classifications

    • H04N21/44012 Processing of video elementary streams involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
    • G06T13/00 Animation
    • G06T15/005 General purpose rendering architectures
    • H04N21/47205 End-user interface for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
    • H04N21/4782 Web browsing, e.g. WebTV

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The disclosure relates to a video special effect processing method and apparatus, a storage medium, and an electronic device. The method includes the following steps: pulling an image to be processed; compiling the image to be processed to obtain image data in a preset format, where the preset format is a format supported by a browser; determining a target rendering mode according to the application scene of the image to be processed; and, according to preset special effect parameters, rendering the image data in the target rendering mode by calling a sub-thread to obtain rendered target image data, so that the browser loads the rendered target image data for playing. Because the image data is rendered in a sub-thread, a long rendering time cannot block other threads, which solves the problem of video stutter when a special effect video is rendered on a mobile terminal with weak hardware performance.

Description

Video special effect processing method and device, storage medium and electronic equipment
Technical Field
The present disclosure relates to the field of video technologies, and in particular, to a method and an apparatus for processing a video special effect, a storage medium, and an electronic device.
Background
In the related art, with the development of short video technology, video content has become increasingly rich, and special effect processing is widely used to make videos more attractive. However, the Canvas logic and rendering involved in special effect processing are executed in the main thread, so a time-consuming rendering task may block the main thread and cause animation stutter. Such stutter degrades the user experience, and on a mobile terminal with limited hardware performance some video special effects barely run at all in the browser or applet.
Disclosure of Invention
In order to overcome the problems in the related art, the present disclosure provides a video special effect processing method, apparatus, storage medium, and electronic device.
According to a first aspect of the embodiments of the present disclosure, there is provided a video special effect processing method applied to a mobile terminal, including:
pulling an image to be processed;
compiling the image to be processed to obtain image data in a preset format, wherein the preset format is a format supported by a browser;
determining a target rendering mode according to the application scene of the image to be processed;
and rendering the image data by adopting the target rendering mode and calling the sub-thread according to a preset special effect parameter to obtain rendered target image data so as to enable the browser to load the rendered target image data for playing.
Optionally, the application scene includes a static application scene, the target rendering mode includes a transmission rendering mode, and the rendering the image data by using the target rendering mode and invoking a sub-thread according to a preset special effect parameter to obtain rendered target image data, so that the browser loads the rendered target image data for playing, includes:
creating a first sub-thread, creating a first OffscreenCanvas object in the first sub-thread, and transmitting a first rendering command to the first sub-thread, wherein the first rendering command carries the special effect parameters and the image data;
rendering the first OffscreenCanvas object through the first sub-thread according to the special effect parameters and the image data to obtain an image bitmap object, and transmitting the image bitmap object to a main thread;
and rendering the image bitmap object to a Canvas element in a document tree corresponding to the image data through the main thread, so that the browser loads the rendered Canvas element to play the target image data.
Optionally, the application scene includes a dynamic application scene, the target rendering mode includes a non-transmission rendering mode, and the rendering the image data by using the target rendering mode and invoking a sub-thread according to a preset special effect parameter to obtain rendered target image data, so that the browser loads the rendered target image data for playing, includes:
creating a second OffscreenCanvas object in a Canvas element in a document tree corresponding to the image data;
creating a second sub-thread and transmitting a second rendering command to the second sub-thread, wherein the second rendering command carries the special effect parameter, the image data and the second OffscreenCanvas object;
and rendering the second OffscreenCanvas object according to the special effect parameters and the image data through the second sub-thread to update the Canvas element, so that the browser loads the updated Canvas element to play the target image data.
Optionally, the compiling the image to be processed to obtain image data in a preset format includes:
and compiling the image to be processed by adopting the WebAssembly technology to obtain image data in a preset format.
Optionally, the compiling the image to be processed by using the WebAssembly technology to obtain image data in a preset format includes:
creating a third child thread;
and compiling the image to be processed by adopting the WebAssembly technology through the third sub-thread to obtain image data in a preset format.
Optionally, the image to be processed is an image in a video, and after the rendered target image data is obtained, the method further includes:
acquiring audio corresponding to the image to be processed and a time stamp of the image to be processed;
and synchronously playing the rendered target image data and the audio corresponding to the target image data according to the time stamp of the image to be processed.
Optionally, the video is in a format that can be decoded by ffmpeg, OpenH264, TinyH264, or de265.
According to a second aspect of the embodiments of the present disclosure, there is provided a video special effects processing apparatus including:
the pulling module is used for pulling the image to be processed;
the compiling module is used for compiling the image to be processed to obtain image data in a preset format, wherein the preset format is a format supported by a browser;
the determining module is used for determining a target rendering mode according to the application scene of the image to be processed;
and the rendering module is used for rendering the image data by adopting the target rendering mode and calling the sub-thread according to a preset special effect parameter to obtain rendered target image data so as to enable the browser to load the rendered target image data for playing.
According to a third aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including:
a storage device having a computer program stored thereon;
processing means for executing the computer program in the storage means to implement the steps of the video special effects processing method provided by the first aspect.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the steps of the video special effect processing method provided by the first aspect of the present disclosure.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
pulling an image to be processed; compiling the image to be processed to obtain image data in a format supported by a browser; determining a target rendering mode according to the application scene of the image to be processed; and, according to a preset special effect parameter, rendering the image data in the target rendering mode by calling a sub-thread to obtain rendered target image data, so that the browser loads the rendered target image data for playing. Because the image data is rendered in a sub-thread, a long rendering time cannot block other threads, which solves the problem of video stutter when a special effect video is rendered on a mobile terminal with weak hardware performance.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a flow diagram illustrating a video effects processing method according to an example embodiment.
Fig. 2 is a block diagram illustrating a video special effects processing apparatus according to an example embodiment.
FIG. 3 is a block diagram illustrating an electronic device in accordance with an example embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
It should be noted that all actions of acquiring signals, information or data in the present application are performed under the premise of complying with the corresponding data protection regulation policy of the country of the location and obtaining the authorization given by the owner of the corresponding device.
Fig. 1 is a flowchart illustrating a video special effect processing method according to an exemplary embodiment. The method is used in a terminal and, as shown in Fig. 1, includes the following steps.
In step S101, the image to be processed is pulled.
Illustratively, the image to be processed may be an image in a live teaching video. Illustratively, the live teaching video may be in a format that can be decoded by ffmpeg, OpenH264, TinyH264, or de265.
Illustratively, a WebSocket interface or the Fetch API may be employed to pull the image to be processed.
In step S102, the image to be processed is compiled to obtain image data in a preset format, where the preset format is a format supported by the browser.
In some embodiments, the image to be processed may be compiled as follows: the image to be processed is compiled by adopting the WebAssembly technology to obtain image data in a preset format. WebAssembly is a binary bytecode technology whose compiled modules can be called from JavaScript, so a decoder compiled to WebAssembly can process the image data in the browser, lifting the browser's restrictions on the video format.
In some embodiments, compiling the image to be processed by adopting the WebAssembly technology to obtain the image data in the preset format may be implemented as follows: creating a third child thread; and compiling the image to be processed by adopting the WebAssembly technology through the third sub-thread to obtain image data in a preset format.
Therefore, the image to be processed is compiled through the sub-thread, and the blockage of the main thread is further reduced.
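The WebAssembly mechanism relied on above can be sketched as follows: a compiled binary module is instantiated and its exports are called from JavaScript. This is a minimal illustration, not the patent's decoder; the tiny hand-assembled module exporting `add` merely stands in for a real compiled decoder (for example, an ffmpeg- or OpenH264-based build), and in the described embodiment the instantiation and calls would happen inside the third sub-thread (a Web Worker) rather than on the main thread.

```javascript
// A trivial WebAssembly module, hand-assembled, exporting add(i32, i32) -> i32.
// A real decoder module would be fetched as a .wasm file instead.
const wasmBytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, // magic: "\0asm"
  0x01, 0x00, 0x00, 0x00, // version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32,i32)->i32
  0x03, 0x02, 0x01, 0x00,                               // function section
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add"
  0x0a, 0x09, 0x01, 0x07, 0x00, 0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b, // body: i32.add
]);

// Compile and instantiate, then call the export from JavaScript.
const module = new WebAssembly.Module(wasmBytes);
const instance = new WebAssembly.Instance(module);
const add = instance.exports.add;
console.log(add(2, 3)); // → 5
```

A decoder compiled this way processes raw video data entirely inside the browser sandbox, which is what releases the format restriction mentioned above.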
In step S103, a target rendering mode is determined according to an application scene of the image to be processed.
In some embodiments, the application scene may be determined by the video picture in the image to be processed. For example, the application scene may be a static application scene, such as a map application scene or a graphic visualization application scene, in which the image to be processed is a static image; the application scene may also be a dynamic application scene, for example an H5 game application scene, in which the image to be processed is a video image. It should be noted that different application scenarios perform special effect processing on the image data through different rendering modes that invoke sub-threads; the specific process is described in step S104.
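The scene-to-mode decision of step S103 can be sketched as a small helper. The function name and the string labels are illustrative assumptions, not identifiers from the patent:

```javascript
// Hypothetical helper: map an application scene to the target rendering mode.
function chooseRenderingMode(scene) {
  switch (scene) {
    case "static":
      // Maps, chart visualisations: only a one-off image is needed, so the
      // higher-performance transmission mode (ImageBitmap handed back) fits.
      return "transmission";
    case "dynamic":
      // H5 games: frames render continuously, so the worker draws straight
      // into the Canvas element (non-transmission mode).
      return "non-transmission";
    default:
      throw new Error(`unknown application scene: ${scene}`);
  }
}

console.log(chooseRenderingMode("static"));  // → "transmission"
console.log(chooseRenderingMode("dynamic")); // → "non-transmission"
```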
In step S104, rendering the image data in a target rendering mode and calling the sub-thread according to the preset special effect parameter to obtain rendered target image data, so that the browser loads the rendered target image data for playing.
In this way, the image data is rendered in a sub-thread, so a long rendering time cannot block other threads, which solves the problem of video stutter when a special effect video is rendered on a mobile terminal with weak hardware performance.
The special effect parameter may be a parameter for adding a glitch (burr) special effect, a parameter for a picture-splitting special effect, or a special effect parameter for another effect; this embodiment is not limited herein.
In some embodiments, the application scene may be a static application scene, and correspondingly, the target rendering mode may be a transmission rendering mode, in which case, step S104 shown in fig. 1 may be implemented by:
creating a first sub-thread, creating a first OffscreenCanvas object in the first sub-thread, and transmitting a first rendering command to the first sub-thread, wherein the first rendering command carries special effect parameters and image data; rendering the first OffscreenCanvas object through the first sub-thread according to the special effect parameters and the image data to obtain an image bitmap object, and transmitting the image bitmap object to the main thread; and rendering the image bitmap object to a Canvas element in a document tree corresponding to the image data through the main thread, so that the browser loads the rendered Canvas element to play the target image data.
Note that both OffscreenCanvas and Canvas are objects used for rendering an image. The difference between the two is that Canvas can only be used in a window environment, whereas OffscreenCanvas can be used both in the window environment and in a sub-thread. Special effect rendering can therefore be performed on the image data in a sub-thread through the created OffscreenCanvas, which prevents long-running rendering from blocking other threads, such as the UI thread. The first sub-thread may be created by the main thread.
The document tree is a tool for describing a document directory structure, and web pages displayed by a browser can be integrated into one document tree, and the document tree includes elements constituting the web pages, such as Canvas elements. By rendering the image bitmap object to the Canvas element in the document tree corresponding to the image data, the Canvas element can be updated, the updated Canvas element can be loaded by the browser, and the special effect newly added in the image to be processed is played and displayed.
In the transmission rendering mode, the image bitmap object is transmitted directly to the main thread without being copied, so this mode has higher performance. A static application scene only needs simple image display, so the higher-performance transmission rendering mode can be chosen, providing an efficient background-rendering and foreground-display way to present the target image data.
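The transmission rendering mode above can be sketched with standard Worker and OffscreenCanvas APIs. This is a minimal sketch assuming a supporting browser; the worker file name `effect-worker.js`, the `makeRenderCommand` helper, and the message shape are illustrative assumptions, not part of any standard or of the patent text:

```javascript
// Hypothetical builder for the first rendering command, which carries the
// special effect parameters and the image data.
function makeRenderCommand(effectParams, imageData) {
  return { type: "render", effectParams, imageData };
}

// Browser-only part (guarded so the sketch is inert outside a browser).
if (typeof Worker !== "undefined" && typeof OffscreenCanvas !== "undefined") {
  // Main thread: create the first sub-thread and hand it the command.
  const worker = new Worker("effect-worker.js"); // hypothetical worker script
  worker.postMessage(makeRenderCommand({ glitch: 0.5 }, /* imageData */ null));

  // Main thread: when the worker transfers back an ImageBitmap, paint it
  // into the Canvas element of the document tree without copying.
  const ctx = document.querySelector("canvas").getContext("bitmaprenderer");
  worker.onmessage = (event) => ctx.transferFromImageBitmap(event.data);
}

// Inside effect-worker.js (the first sub-thread), the sketch would be:
//   const canvas = new OffscreenCanvas(width, height);
//   ...draw the frame with the special effect applied...
//   const bitmap = canvas.transferToImageBitmap();
//   postMessage(bitmap, [bitmap]); // transferred, not copied
```

The second argument to `postMessage` marks the ImageBitmap as transferable, which is what gives this mode its copy-free performance.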
In some embodiments, the application scene may be a dynamic application scene, and correspondingly, the target rendering mode may be a non-transmission rendering mode, in which case, step S104 shown in fig. 1 may be implemented by:
creating a second OffscreenCanvas object in a Canvas element in a document tree corresponding to the image data; creating a second sub-thread and transmitting a second rendering command to the second sub-thread, wherein the second rendering command carries the special effect parameter, the image data and the second OffscreenCanvas object; and rendering the second OffscreenCanvas object according to the special effect parameters and the image data through the second sub-thread to update the Canvas element, so that the browser loads the updated Canvas element to play the target image data.
Similar to the first child thread, a second child thread may also be created by the main thread.
In this embodiment, unlike the first OffscreenCanvas object, the second OffscreenCanvas object is created in the Canvas element, so the Canvas element is updated directly through the second OffscreenCanvas object. The image bitmap object therefore does not need to be transmitted to the Canvas element, which provides the shortest rendering path, improves overall rendering efficiency, and meets the performance requirement for the browser to display images in a dynamic application scene.
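The non-transmission rendering mode can be sketched with `transferControlToOffscreen`, the standard way to obtain an OffscreenCanvas bound to an on-page Canvas element. Again a sketch under the assumption of browser support; the worker script name and the `makeOffscreenCommand` helper are illustrative:

```javascript
// Hypothetical builder for the second rendering command, which carries the
// special effect parameter, the image data and the second OffscreenCanvas.
function makeOffscreenCommand(effectParams, imageData, offscreen) {
  return { type: "render", effectParams, imageData, canvas: offscreen };
}

// Browser-only part (guarded so the sketch is inert outside a browser).
if (typeof Worker !== "undefined" && typeof document !== "undefined") {
  // Detach an OffscreenCanvas from the Canvas element in the document tree;
  // whatever the worker draws into it appears on screen directly.
  const canvasEl = document.querySelector("canvas");
  const offscreen = canvasEl.transferControlToOffscreen();

  // The second sub-thread renders every frame straight into the element,
  // so no ImageBitmap ever has to be handed back to the main thread.
  const worker = new Worker("effect-worker.js"); // hypothetical worker script
  const cmd = makeOffscreenCommand({ glitch: 0.5 }, null, offscreen);
  worker.postMessage(cmd, [offscreen]); // OffscreenCanvas is transferable
}
```

Because the worker owns the canvas, this is the shortest rendering path described above: each drawn frame updates the Canvas element with no main-thread hand-off.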
In some embodiments, the image to be processed is an image in a video, and after obtaining the rendered target image data, the method further includes: acquiring audio corresponding to the image to be processed and a time stamp of the image to be processed; and synchronously playing the rendered image data and the audio corresponding to the image data according to the time stamp of the image to be processed.
When the acquired image to be processed is an image in a video, the image and the audio need to be played synchronously. Therefore, the audio corresponding to the image to be processed and the timestamp of the image to be processed are acquired, and the image and the audio are played synchronously according to that timestamp.
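One common way to use such a timestamp, sketched here as an assumption since the patent does not spell out the policy, is to compare each rendered frame's timestamp against the audio clock and decide whether to present, drop, or hold the frame. The function name, tolerance, and labels are illustrative:

```javascript
// Hypothetical audio/video sync helper: frameTs and audioTs in milliseconds.
function syncAction(frameTs, audioTs, toleranceMs = 40) {
  const drift = frameTs - audioTs;
  if (Math.abs(drift) <= toleranceMs) return "present"; // close enough: show now
  if (drift < 0) return "drop"; // frame is late: skip it to catch up
  return "wait";                // frame is early: hold until audio catches up
}

console.log(syncAction(1000, 1010)); // → "present"
console.log(syncAction(1000, 1100)); // → "drop"
console.log(syncAction(1200, 1000)); // → "wait"
```

Keying video presentation to the audio clock is the usual design choice because the ear notices audio glitches far more readily than a dropped frame.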
Fig. 2 is a block diagram illustrating a video effects processing apparatus according to an example embodiment. Referring to fig. 2, the apparatus includes a pull module 201, a compiling module 202, a determining module 203, and a rendering module 204.
A pulling module 201, configured to pull an image to be processed;
a compiling module 202, configured to compile the image to be processed to obtain image data in a preset format, where the preset format is a format supported by a browser;
a determining module 203, configured to determine a target rendering mode according to an application scene of the image to be processed;
and the rendering module 204 is configured to render the image data by adopting the target rendering mode and calling a sub-thread according to a preset special effect parameter to obtain rendered target image data, so that the browser loads the rendered target image data for playing.
Optionally, the application scene includes a static application scene, the target rendering mode includes a transmission rendering mode, and the rendering module 204 includes:
the first creating submodule is used for creating a first sub-thread, creating a first OffscreenCanvas object in the first sub-thread, and transmitting a first rendering command to the first sub-thread, wherein the first rendering command carries the special effect parameters and the image data;
the first rendering sub-module is used for rendering the first OffscreenCanvas object through the first sub-thread according to the special effect parameters and the image data to obtain an image bitmap object, and transmitting the image bitmap object to the main thread;
and the second rendering submodule is used for rendering the image bitmap object to a Canvas element in a document tree corresponding to the image data through the main thread so that the rendered Canvas element is loaded by the browser to play the target image data.
Optionally, the application scene includes a dynamic application scene, the target rendering mode includes a non-transmission rendering mode, and the rendering module 204 includes:
a second creating sub-module, configured to create a second OffscreenCanvas object in a Canvas element in a document tree corresponding to the image data;
the second creating sub-module is further used for creating a second sub-thread and transmitting a second rendering command to the second sub-thread, wherein the second rendering command carries the special effect parameter, the image data and the second OffscreenCanvas object;
and the third rendering sub-module is used for rendering the second OffscreenCanvas object according to the special effect parameter and the image data through the second sub-thread to update the Canvas element, so that the browser loads the updated Canvas element to play the target image data.
Optionally, the compiling module 202 is specifically configured to compile the image to be processed by using the WebAssembly technology, so as to obtain image data in a preset format.
Optionally, the compiling module 202 includes:
a third creating submodule for creating a third child thread;
and the compiling sub-module is used for compiling the image to be processed by adopting the WebAssembly technology through the third sub-thread to obtain image data in a preset format.
Optionally, the image to be processed is an image in a video, and after obtaining rendered target image data, the apparatus 200 further includes:
the acquisition module is used for acquiring the audio corresponding to the image to be processed and the timestamp of the image to be processed;
and the synchronization module is used for synchronously playing the rendered image data and the audio corresponding to the image data according to the time stamp of the image to be processed.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
The present disclosure also provides a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the steps of the video special effects processing method provided by the present disclosure.
Fig. 3 is a block diagram illustrating an electronic device 300 in accordance with an example embodiment. For example, the electronic device 300 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 3, electronic device 300 may include one or more of the following components: a processing component 302, a memory 304, a power component 306, a multimedia component 308, an audio component 310, an input/output (I/O) interface 312, a sensor component 314, and a communication component 316.
The processing component 302 generally controls overall operation of the electronic device 300, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 302 may include one or more processors 320 to execute instructions to perform all or a portion of the steps of the video effects processing method described above. Further, the processing component 302 can include one or more modules that facilitate interaction between the processing component 302 and other components. For example, the processing component 302 may include a multimedia module to facilitate interaction between the multimedia component 308 and the processing component 302.
The memory 304 is configured to store various types of data to support operations at the electronic device 300. Examples of such data include instructions for any application or method operating on the electronic device 300, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 304 may be implemented by any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power components 306 provide power to the various components of the electronic device 300. Power components 306 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for electronic device 300.
The multimedia component 308 comprises a screen providing an output interface between the electronic device 300 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 308 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 300 is in an operation mode, such as a photographing mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 310 is configured to output and/or input audio signals. For example, the audio component 310 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 300 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 304 or transmitted via the communication component 316. In some embodiments, audio component 310 also includes a speaker for outputting audio signals.
The I/O interface 312 provides an interface between the processing component 302 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
Sensor assembly 314 includes one or more sensors for providing various aspects of status assessment for electronic device 300. For example, sensor assembly 314 may detect an open/closed state of electronic device 300 and the relative positioning of components, such as the display and keypad of electronic device 300. Sensor assembly 314 may also detect a change in the position of electronic device 300 or of a component of electronic device 300, the presence or absence of user contact with electronic device 300, the orientation or acceleration/deceleration of electronic device 300, and a change in the temperature of electronic device 300. Sensor assembly 314 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 314 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 314 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 316 is configured to facilitate wired or wireless communication between the electronic device 300 and other devices. The electronic device 300 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 316 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 316 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 300 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described video effect processing methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions is also provided, such as the memory 304, the instructions being executable by the processor 320 of the electronic device 300 to perform the video effect processing method described above. For example, the non-transitory computer-readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. A video special effect processing method, applied to a mobile terminal, the method comprising:
pulling an image to be processed;
compiling the image to be processed to obtain image data in a preset format, wherein the preset format is a format supported by a browser;
determining a target rendering mode according to the application scene of the image to be processed;
rendering the image data in the target rendering mode by invoking a sub-thread according to preset special effect parameters to obtain rendered target image data, so that the browser loads the rendered target image data for playing.
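The scene-to-mode mapping in this claim can be sketched as pure logic. The function name `selectRenderingMode` and the scene strings are illustrative assumptions, not terms defined by the claims; the mode strings mirror claims 2 and 3:

```javascript
// Hypothetical sketch of the "determine a target rendering mode" step.
function selectRenderingMode(applicationScene) {
  switch (applicationScene) {
    case "static":
      // Claim 2: static scenes use the transmission rendering mode
      // (the sub-thread hands a finished bitmap back to the main thread).
      return "transmission";
    case "dynamic":
      // Claim 3: dynamic scenes use the non-transmission rendering mode
      // (the sub-thread draws directly onto the page's canvas).
      return "non-transmission";
    default:
      throw new Error(`unknown application scene: ${applicationScene}`);
  }
}
```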
2. The method according to claim 1, wherein the application scene comprises a static application scene, the target rendering mode comprises a transmission rendering mode, and rendering the image data in the target rendering mode by invoking a sub-thread according to the preset special effect parameters to obtain the rendered target image data, so that the browser loads the rendered target image data for playing, comprises:
creating a first sub-thread, creating a first OffscreenCanvas object in the first sub-thread, and transmitting a first rendering command to the first sub-thread, wherein the first rendering command carries the special effect parameters and the image data;
rendering the first OffscreenCanvas object through the first sub-thread according to the special effect parameters and the image data to obtain an image bitmap object, and transmitting the image bitmap object to a main thread;
and rendering the image bitmap object to a Canvas element in a document tree corresponding to the image data through the main thread, so that the browser loads the rendered Canvas element to play the target image data.
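A browser-side sketch of this transmission flow, assuming a worker script name `effect-worker.js` and a helper `makeFirstRenderCommand` (both illustrative, not from the claim). The worker renders into its own OffscreenCanvas, converts the result to an ImageBitmap with `transferToImageBitmap()`, and posts it back; the main thread paints it into the page's Canvas element via a `bitmaprenderer` context:

```javascript
// Build the first rendering command of claim 2; it carries the special
// effect parameters and the image data. Field names are assumptions.
function makeFirstRenderCommand(effectParams, imageData) {
  return { type: "render", effectParams, imageData };
}

// Main thread (browser only): create the first sub-thread and receive
// the rendered ImageBitmap back from it.
function startTransmissionRendering(canvasElement, effectParams, imageData) {
  const worker = new Worker("effect-worker.js"); // hypothetical worker script
  worker.postMessage(makeFirstRenderCommand(effectParams, imageData));
  worker.onmessage = (event) => {
    // Paint the ImageBitmap produced by the worker into the Canvas
    // element in the document tree.
    canvasElement.getContext("bitmaprenderer").transferFromImageBitmap(event.data);
  };
  return worker;
}

// effect-worker.js (the first sub-thread) might look like:
//   onmessage = ({ data: { effectParams, imageData } }) => {
//     const off = new OffscreenCanvas(imageData.width, imageData.height);
//     const ctx = off.getContext("2d");
//     ctx.putImageData(imageData, 0, 0);
//     // ...apply effectParams to ctx here...
//     const bitmap = off.transferToImageBitmap();
//     postMessage(bitmap, [bitmap]); // ImageBitmap is transferable
//   };
```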
3. The method according to claim 1, wherein the application scene comprises a dynamic application scene, the target rendering mode comprises a non-transmission rendering mode, and rendering the image data in the target rendering mode by invoking a sub-thread according to the preset special effect parameters to obtain the rendered target image data, so that the browser loads the rendered target image data for playing, comprises:
creating a second OffscreenCanvas object in a Canvas element in a document tree corresponding to the image data;
creating a second sub-thread and transmitting a second rendering command to the second sub-thread, wherein the second rendering command carries the special effect parameters, the image data, and the second OffscreenCanvas object;
rendering the second OffscreenCanvas object according to the special effect parameters and the image data through the second sub-thread to update the Canvas element, so that the browser loads the updated Canvas element to play the target image data.
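In the non-transmission mode, the main thread hands the on-page canvas's drawing surface to the worker with `transferControlToOffscreen()`, so the worker's draws appear directly without a per-frame bitmap round trip. A sketch under the same naming assumptions as above (worker script name and message fields are hypothetical):

```javascript
// Build the second rendering command of claim 3; it carries the effect
// parameters, the image data, and the OffscreenCanvas, which must also
// be listed as a transferable object when posted to the worker.
function makeSecondRenderCommand(effectParams, imageData, offscreenCanvas) {
  return {
    message: { type: "render", effectParams, imageData, canvas: offscreenCanvas },
    transfer: [offscreenCanvas],
  };
}

// Main thread (browser only): create the second OffscreenCanvas from the
// Canvas element in the document tree, then start the second sub-thread.
function startNonTransmissionRendering(canvasElement, effectParams, imageData) {
  const offscreen = canvasElement.transferControlToOffscreen();
  const worker = new Worker("effect-worker.js"); // hypothetical worker script
  const { message, transfer } = makeSecondRenderCommand(effectParams, imageData, offscreen);
  worker.postMessage(message, transfer);
  return worker; // the worker now draws straight onto the page's canvas
}
```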
4. The method according to claim 1, wherein the compiling the image to be processed to obtain image data in a preset format comprises:
compiling the image to be processed by using WebAssembly technology to obtain the image data in the preset format.
5. The method of claim 4, wherein compiling the image to be processed by using WebAssembly technology to obtain image data in a preset format comprises:
creating a third sub-thread;
compiling the image to be processed by using WebAssembly technology through the third sub-thread to obtain the image data in the preset format.
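In practice this step typically means running a decoder (e.g. ffmpeg or OpenH264 compiled to WebAssembly, as in the cited art) inside a worker. As a minimal, runnable illustration of loading and calling a WebAssembly module from JavaScript, the hand-assembled module below exports a trivial `add` function standing in for the image-conversion routine; the byte layout follows the standard minimal-module format:

```javascript
// A smallest-possible WebAssembly module exporting add(a, b) = a + b.
// In the patented flow the module would instead be a compiled image
// decoder, instantiated inside the third sub-thread.
const wasmBytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // "\0asm", version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // function 0 uses type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add" = func 0
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section, one body
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0/1, i32.add, end
]);

const { add } = new WebAssembly.Instance(new WebAssembly.Module(wasmBytes)).exports;
console.log(add(2, 3)); // → 5
```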
6. The method according to any one of claims 1-5, wherein the image to be processed is an image in a video, and after obtaining the rendered target image data, the method further comprises:
acquiring audio corresponding to the image to be processed and a time stamp of the image to be processed;
synchronously playing the rendered image data and the corresponding audio according to the time stamp of the image to be processed.
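The timestamp-based synchronization can be sketched as a per-frame decision against the audio clock. The 40 ms tolerance and the action names are illustrative assumptions, not values from the claim:

```javascript
// Decide what to do with a rendered video frame relative to the audio
// clock, using the frame's timestamp (claim 6). Units are milliseconds.
function syncDecision(frameTimestampMs, audioClockMs, toleranceMs = 40) {
  const drift = frameTimestampMs - audioClockMs;
  if (Math.abs(drift) <= toleranceMs) return "play"; // in sync: present now
  if (drift < 0) return "drop"; // frame is late behind the audio: skip it
  return "wait"; // frame is early: hold until the audio clock catches up
}
```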
7. The method of claim 6, wherein the format of the video comprises an ffmpeg format, an OpenH264 format, a TinyH264 format, or a de265 format.
8. A video special effects processing apparatus, comprising:
the pulling module is used for pulling the image to be processed;
the compiling module is used for compiling the image to be processed to obtain image data in a preset format, wherein the preset format is a format supported by a browser;
the determining module is used for determining a target rendering mode according to the application scene of the image to be processed;
the rendering module is used for rendering the image data in the target rendering mode by invoking a sub-thread according to preset special effect parameters to obtain rendered target image data, so that the browser loads the rendered target image data for playing.
9. An electronic device, comprising:
a storage device having a computer program stored thereon;
processing means for executing the computer program in the storage means to carry out the steps of the method according to any one of claims 1 to 7.
10. A computer-readable storage medium, on which computer program instructions are stored, which program instructions, when executed by a processor, carry out the steps of the method according to any one of claims 1 to 7.
CN202210312474.4A 2022-03-28 2022-03-28 Video special effect processing method and device, storage medium and electronic equipment Pending CN114827721A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210312474.4A CN114827721A (en) 2022-03-28 2022-03-28 Video special effect processing method and device, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210312474.4A CN114827721A (en) 2022-03-28 2022-03-28 Video special effect processing method and device, storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN114827721A true CN114827721A (en) 2022-07-29

Family

ID=82530199

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210312474.4A Pending CN114827721A (en) 2022-03-28 2022-03-28 Video special effect processing method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN114827721A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115474074A (en) * 2022-08-29 2022-12-13 咪咕文化科技有限公司 Video background replacing method and device, computing equipment and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110198479A (en) * 2019-05-24 2019-09-03 浪潮软件集团有限公司 A kind of browser audio/video decoding playback method based on webassembly
CN111641838A (en) * 2020-05-13 2020-09-08 深圳市商汤科技有限公司 Browser video playing method and device and computer storage medium

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110198479A (en) * 2019-05-24 2019-09-03 浪潮软件集团有限公司 A kind of browser audio/video decoding playback method based on webassembly
CN111641838A (en) * 2020-05-13 2020-09-08 深圳市商汤科技有限公司 Browser video playing method and device and computer storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
易旭昕: "OffscreenCanvas Concept Explanation and Usage Analysis", pages 1 - 11, Retrieved from the Internet <URL:https://zhuanlan.zhihu.com/p/34698375/> *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115474074A (en) * 2022-08-29 2022-12-13 咪咕文化科技有限公司 Video background replacing method and device, computing equipment and storage medium
CN115474074B (en) * 2022-08-29 2024-05-07 咪咕文化科技有限公司 Video background replacement method, device, computing equipment and storage medium

Similar Documents

Publication Publication Date Title
US20170304735A1 (en) Method and Apparatus for Performing Live Broadcast on Game
CN110231901B (en) Application interface display method and device
EP3147802B1 (en) Method and apparatus for processing information
CN109451341B (en) Video playing method, video playing device, electronic equipment and storage medium
CN105808305B (en) Static resource loading method and device
CN111078170B (en) Display control method, display control device, and computer-readable storage medium
CN111866571B (en) Method and device for editing content on smart television and storage medium
CN106775235B (en) Screen wallpaper display method and device
CN110704059A (en) Image processing method, image processing device, electronic equipment and storage medium
CN109117144B (en) Page processing method, device, terminal and storage medium
US20170308397A1 (en) Method and apparatus for managing task of instant messaging application
CN107566878B (en) Method and device for displaying pictures in live broadcast
CN114827721A (en) Video special effect processing method and device, storage medium and electronic equipment
CN112882784A (en) Application interface display method and device, intelligent equipment and medium
CN108829473B (en) Event response method, device and storage medium
CN117119260A (en) Video control processing method and device
CN107967233B (en) Electronic work display method and device
CN107885464B (en) Data storage method, device and computer readable storage medium
CN115963929A (en) VR display method, device and storage medium
CN112866612B (en) Frame insertion method, device, terminal and computer readable storage medium
CN111246012B (en) Application interface display method and device and storage medium
CN109389547B (en) Image display method and device
CN114125528A (en) Video special effect processing method and device, electronic equipment and storage medium
CN107423060B (en) Animation effect presenting method and device and terminal
CN111538447A (en) Information display method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination