WO2024051394A1 - Video processing method and apparatus, electronic device, computer-readable storage medium, and computer program product

Info

Publication number
WO2024051394A1
Authority
WO
WIPO (PCT)
Prior art keywords
transparency
picture
video
color
video frame
Prior art date
Application number
PCT/CN2023/110290
Other languages
English (en)
Chinese (zh)
Inventor
李棚
邓轩颖
Original Assignee
腾讯科技(深圳)有限公司
Priority date
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司
Publication of WO2024051394A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 - 3D [Three Dimensional] image rendering
    • G06T 15/10 - Geometric effects
    • G06T 15/20 - Perspective computation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 - 3D [Three Dimensional] image rendering
    • G06T 15/50 - Lighting effects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 - Geometric image transformations in the plane of the image
    • G06T 3/40 - Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/70 - Determining position or orientation of objects or cameras

Definitions

  • Embodiments of the present application relate to the field of Internet technology, and relate to, but are not limited to, a video processing method and apparatus, an electronic device, a computer-readable storage medium, and a computer program product.
  • However, the code implementation method is very time-consuming and labor-intensive, requiring a large amount of special effects code to be written; the dynamic picture method usually supports only software decoding, and the animation special effects produced have problems such as unclear edges.
  • In addition, animated special effects files are relatively large and occupy a lot of memory; the video playback method does not support transparency, so the played special effects video cannot be fully integrated into the background image of the current screen.
  • Embodiments of the present application provide a video processing method and apparatus, an electronic device, a computer-readable storage medium, and a computer program product, which can be used at least in the fields of animation production and special effects video processing. They can simplify the way special effects videos are produced, and support carrying transparency information in the produced special effects video, so that the special effects in the original special effects video can be restored when the transparency special effects video is played, while reducing the memory usage when the transparency special effects video is produced.
  • An embodiment of the present application provides a video processing method.
  • The method includes: for each original video frame of the original special effects video, obtaining the color channel information and the transparency channel information of each pixel in the original video frame; drawing, based on the color channel information of each pixel in the original video frame, a color picture corresponding to the original video frame, and drawing, based on the transparency channel information of each pixel in the original video frame, a transparency picture corresponding to the original video frame; performing splicing processing on the color picture and transparency picture of each original video frame according to preset configuration information to obtain at least one spliced picture; performing video conversion processing on the at least one spliced picture to obtain a transparency spliced video; and performing special effects video rendering based on the preset configuration information and the transparency spliced video to obtain a transparency special effects video.
  • An embodiment of the present application provides a video processing device.
  • The device includes: an acquisition module configured to acquire, for each original video frame of the original special effects video, the color channel information and the transparency channel information of each pixel in the original video frame; a picture generation module configured to draw, based on the color channel information of each pixel in the original video frame, a color picture corresponding to the original video frame, and to draw, based on the transparency channel information of each pixel in the original video frame, a transparency picture corresponding to the original video frame; a splicing processing module configured to splice the color picture and transparency picture of each original video frame according to preset configuration information to obtain at least one spliced picture; a video conversion module configured to perform video conversion processing on the at least one spliced picture to obtain a transparency spliced video; and a rendering module configured to perform special effects video rendering based on the preset configuration information and the transparency spliced video to obtain a transparency special effects video.
  • An embodiment of the present application provides an electronic device, including: a memory for storing executable instructions; and a processor for implementing the above video processing method when executing the executable instructions stored in the memory.
  • Embodiments of the present application provide a computer program product or computer program.
  • The computer program product or computer program includes executable instructions stored in a computer-readable storage medium; when the processor of an electronic device reads the executable instructions from the computer-readable storage medium and executes them, the above video processing method is implemented.
  • Embodiments of the present application provide a computer-readable storage medium storing executable instructions which, when executed by a processor, implement the above video processing method.
  • The embodiments of the present application have the following beneficial effects: for each original video frame in the original special effects video, a color picture corresponding to the original video frame is drawn based on the color channel information of each pixel in the original video frame, and a transparency picture corresponding to the original video frame is drawn based on the transparency channel information of each pixel in the original video frame; the color picture and the transparency picture are then spliced to obtain at least one spliced picture.
  • In this way, the generated transparency spliced video also carries the transparency information of the original special effects video, so that when rendering the special effects video, the transparency special effects video can be rendered based on the transparency information, and the special effects in the original special effects video can be highly restored when the transparency special effects video is played.
  • In addition, since there is no need to write code to realize the production of the special effects, the method is extremely convenient and greatly reduces the production cost of transparency special effects videos and the memory usage during production.
  • Figure 1 is a schematic diagram of the implementation process of using Lottie animation to create dynamic pictures
  • Figure 2 is a schematic diagram of different button animation effects in Lottie animation design
  • Figure 3 is a schematic diagram of animation special effects produced using APNG technology
  • Figure 4 is a schematic diagram comparing the sizes of GIF dynamic images and WebP dynamic images
  • Figure 5 is an optional architectural schematic diagram of the video processing system provided by the embodiment of the present application.
  • Figure 6 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • Figure 7 is an optional flow diagram of the video processing method provided by the embodiment of the present application.
  • Figure 8 is another optional flow diagram of the video processing method provided by the embodiment of the present application.
  • Figure 9 is another optional flow diagram of the video processing method provided by the embodiment of the present application.
  • Figure 10 is a schematic diagram of the light and shadow special effects provided by the embodiment of the present application.
  • Figure 11 is a schematic diagram of the special effects of game gifts provided by the embodiment of the present application.
  • Figure 12 is a schematic diagram of particle special effects provided by an embodiment of the present application.
  • Figure 13 is a schematic diagram of flame special effects provided by the embodiment of the present application.
  • Figure 14 is a schematic flow diagram of the overall video processing method provided by the embodiment of the present application.
  • Figure 15 is a schematic diagram of continuous PNG images of game special effects exported by AE software according to the embodiment of the present application.
  • Figure 16 is a schematic diagram of the two pictures obtained by separating the four RGBA channels, provided by the embodiment of the present application.
  • Figure 17 is a schematic diagram of spliced pictures provided by the embodiment of the present application.
  • Figure 18 is a schematic diagram of the insertion position of alpha configuration information provided by the embodiment of the present application.
  • Figure 19 is a schematic diagram of the rendering process provided by the embodiment of the present application.
  • Alpha blending technology refers to the overall technical solution for generating transparency special effects videos in the embodiment of this application.
  • Alpha blending effect refers to some case effects produced by using the methods of the embodiments of this application.
  • Light and shadow effects refer to the light and shadow display effects in the display interface (such as a game interface).
  • Particle special effects refer to the dynamic effects of many small objects in the display interface (such as a game interface).
  • Hardware decoding is a method of video decoding that does not use central processing unit (CPU, Central Processing Unit) resources but instead uses dedicated hardware, such as a mobile phone's graphics processing unit (GPU, Graphics Processing Unit), to decode.
  • Soft decoding (software decoding) is a method of video decoding that uses the CPU to decode.
  • Lottie is an open-source, cross-platform, complete animation effect solution.
  • Designers can use the special effects production tool Adobe After Effects (AE) to design animations, and then use the Bodymovin plug-in provided by Lottie to export the designed animations into the JavaScript Object Notation (JSON) format, which can be used directly on different systems without any additional operations.
  • Figure 1 shows the implementation process of using Lottie animation to create dynamic pictures: the designed animation is exported into a JSON file 103 through AE 101 and the Bodymovin plug-in 102, and the JSON file can then be applied to the iOS system, Android system, Web, and React Native (an open-source cross-platform mobile application development framework).
  • Lottie animation design is usually used for the animation effects of buttons and the like; as shown in Figure 2, the position arrow 201, pointing arrow 202, heart shape 203, and so on can all be realized through Lottie animation design.
  • GIF animation refers to using specialized animation production tools, or photographing objects frame by frame, so that multiple GIF images play quickly and continuously according to certain rules to form a moving image.
  • For example, various dynamic emoticon packs can be made from GIF animations.
  • the PNG-based bitmap animation format (APNG, Animated Portable Network Graphics) is an animation extension of the Portable Network Graphics (PNG, Portable Network Graphics) format.
  • the first frame of APNG is a standard PNG image, and the remaining animation and frame rate data other than the first frame are placed in the PNG extension data block.
  • APNG is similar to the key frames of a video: the key frames carry complete image information, while only the changes between two key frames are retained. Simply put, APNG supports full color and transparency without blurring problems. Figure 3 shows an animation special effect produced using APNG technology, in which the boundary of the object 301 in the animation special effect is clear and free of edge artifacts.
  • The main goal of WebP is to reduce the size of image files while maintaining image quality comparable to the JPEG (Joint Photographic Experts Group) format, thereby reducing the time and traffic consumed when sending images on the Internet.
  • Figure 4 is a comparison of the sizes of GIF dynamic images and WebP dynamic images; it can be seen that the image size of the WebP dynamic image 402 is much smaller than that of the GIF dynamic image 401.
  • MP4 video is a video composed of frame-by-frame animation.
  • Among these formats, Lottie animation is a string file, supports only soft decoding, and cannot express particle effects.
  • GIF animation also supports only soft decoding, with 8-bit color and an alpha transparent background, but the edges around the special effects are very obvious.
  • APNG also supports only soft decoding; although its file size is larger than an alpha video file, it solves the edge problem of GIF animation.
  • WebP also supports only soft decoding, and its files are larger than alpha video files.
  • MP4 video supports hard decoding but does not support alpha transparent background.
  • In contrast, the video processing method of the embodiment of the present application can solve these problems, and it also has advantages in decoding capability, ease of use, special effects restoration, and transparency support.
  • In the video processing method provided by the embodiments of the present application, first, for each original video frame of the original special effects video, the color channel information and transparency channel information of each pixel in the original video frame are obtained; a color picture corresponding to the original video frame is drawn based on the color channel information of each pixel in the original video frame, and a transparency picture corresponding to the original video frame is drawn based on the transparency channel information of each pixel in the original video frame; then, according to preset configuration information, the color picture and transparency picture of each original video frame are spliced to obtain at least one spliced picture, and video conversion processing is performed on the at least one spliced picture to obtain a transparency spliced video; finally, special effects video rendering is performed based on the preset configuration information and the transparency spliced video to obtain a transparency special effects video, and the transparency special effects video is displayed.
  • In this way, the generated transparency spliced video carries the transparency information of the original special effects video, so that when rendering the special effects video, the transparency special effects video can be rendered based on the transparency information; the special effects in the original special effects video can thus be highly restored when the transparency special effects video is played, which greatly reduces the production cost of transparency special effects videos and the memory usage during production.
  • the video processing device is an electronic device used to implement the video processing method.
  • the video processing device (ie, electronic device) provided by the embodiment of the present application can be implemented as a terminal or as a server.
  • In one implementation, the video processing device provided by the embodiment of the present application can be implemented as a notebook computer, tablet computer, desktop computer, mobile device (for example, a mobile phone, portable music player, personal digital assistant, dedicated messaging device, or portable game device), smart robot, smart home appliance, smart vehicle-mounted device, or any other type of terminal with video rendering and display functions, video processing functions, and game functions. In another implementation, the video processing device provided by the embodiment of the present application can be implemented as a server, where the server can be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server that provides basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communications, middleware services, domain name services, security services, content delivery networks (CDN, Content Delivery Network), and big data and artificial intelligence platforms.
  • the terminal and the server can be connected directly or indirectly through wired or wireless communication methods, which are not limited in the embodiments of this application.
  • Figure 5 is an optional architectural schematic diagram of the video processing system provided by the embodiment of the present application.
  • The embodiment of the present application takes the video processing method as applied to special effects video production as an example; that is to say, the finally generated transparency special effects video can be any type of special effects video.
  • the video processing system 10 at least includes a terminal 100, a network 200 and a server 300.
  • The terminal 100 in the embodiment of the present application is installed with at least a special effects video production application, or a video playback application is installed on the terminal 100.
  • the video playback application has a special effects video production function module.
  • the server 300 is a server for special effects video production applications, and the server 300 may constitute the video processing device of the embodiment of the present application.
  • the terminal 100 connects to the server 300 through the network 200.
  • the network 200 may be a wide area network or a local area network, or a combination of the two.
  • When running the special effects video production application, the terminal 100 obtains the original special effects video through the client of the special effects video production application, encapsulates the original special effects video into a video processing request, and sends the video processing request to the server 300 through the network 200.
  • After receiving the video processing request, the server 300 parses it to obtain the original special effects video and each original video frame of the original special effects video, and obtains the color channel information and transparency channel information of each pixel in each original video frame; then, for each original video frame, the server draws a color picture corresponding to the original video frame based on the color channel information of each pixel in the original video frame, and draws a transparency picture corresponding to the original video frame based on the transparency channel information of each pixel in the original video frame; next, according to the preset configuration information, the server splices the color picture and transparency picture of each original video frame to obtain at least one spliced picture, and performs video conversion processing on the at least one spliced picture to obtain a transparency spliced video; finally, the server performs special effects video rendering based on the preset configuration information and the transparency spliced video to obtain a transparency special effects video.
  • In some embodiments, the video processing method can also be implemented by the terminal 100, that is, the terminal serves as the execution subject. The terminal obtains each original video frame of the original special effects video and the color channel information and transparency channel information of each pixel of the original video frame; for each original video frame, the terminal generates a color picture and a transparency picture corresponding to the original video frame based on the color channel information and transparency channel information of each pixel in the original video frame; then, according to the preset configuration information, the terminal splices the color picture and transparency picture of each original video frame to obtain at least one spliced picture, and performs video conversion processing on the at least one spliced picture to obtain a transparency spliced video; finally, the terminal performs special effects video rendering based on the preset configuration information and the transparency spliced video to obtain a transparency special effects video. While obtaining the transparency special effects video, the terminal can also display it.
  • the video processing method provided by the embodiment of the present application can also be based on a cloud platform and implemented through cloud technology.
  • For example, the above-mentioned server 300 can be a cloud server: for each original video frame of the original special effects video, the color channel information and transparency channel information of each pixel in the original video frame are obtained through the cloud server, or the color picture and transparency picture corresponding to each original video frame are generated through the cloud server.
  • Alternatively, the cloud server performs the splicing processing on the color pictures and transparency pictures of each original video frame, or performs the video conversion processing on the at least one spliced picture, or performs special effects video rendering based on the preset configuration information and the transparency spliced video to obtain the transparency special effects video, and so on.
  • In some embodiments, there may also be cloud storage, and the original special effects video may be stored in the cloud storage, or the color channel information and transparency channel information of each pixel in each original video frame may be stored in the cloud storage.
  • cloud technology refers to a hosting technology that unifies a series of resources such as hardware, software, and networks within a wide area network or local area network to realize data calculation, storage, processing, and sharing.
  • Cloud technology is a general term for the network technology, information technology, integration technology, management platform technology, application technology, and so on that are applied based on the cloud computing business model; it can form a resource pool that is used on demand, flexibly and conveniently. Cloud computing technology will become an important support.
  • The background services of technical network systems require a large amount of computing and storage resources, for example for video websites, picture websites, and other portal websites. With the rapid development and application of the Internet industry, in the future each item may have its own identification mark, which needs to be transmitted to a backend system for logical processing; data at different levels will be processed separately, and all types of industry data require strong system support, which can only be achieved through cloud computing.
  • Figure 6 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • the electronic device shown in Figure 6 includes: at least one processor 310, a memory 350, at least one network interface 320 and a user interface 330.
  • the various components in the electronic device are coupled together through bus system 340 .
  • the bus system 340 is used to implement connection communication between these components.
  • the bus system 340 also includes a power bus, a control bus and a status signal bus.
  • For the sake of clarity, the various buses are all labeled as bus system 340 in Figure 6.
  • The processor 310 may be an integrated circuit chip with signal processing capabilities, such as a general-purpose processor, a digital signal processor (DSP, Digital Signal Processor), another programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, where the general-purpose processor may be a microprocessor or any conventional processor.
  • User interface 330 includes one or more output devices 331 that enable the presentation of media content, and one or more input devices 332.
  • Memory 350 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid-state memory, hard disk drives, and optical disk drives. Memory 350 optionally includes one or more storage devices physically located remotely from processor 310. Memory 350 includes volatile memory or non-volatile memory, and may include both: non-volatile memory can be read-only memory (ROM, Read Only Memory), and volatile memory can be random access memory (RAM, Random Access Memory). The memory 350 described in the embodiments of this application is intended to include any suitable type of memory. In some embodiments, memory 350 is capable of storing data to support various operations, examples of which include programs, modules, and data structures, or subsets or supersets thereof, as exemplarily described below.
  • Operating system 351, including system programs used to process various basic system services and perform hardware-related tasks, such as a framework layer, core library layer, and driver layer, used to implement various basic services and process hardware-based tasks; network communication module 352, for reaching other computing devices via one or more (wired or wireless) network interfaces 320, exemplary network interfaces 320 including Bluetooth, Wireless Fidelity (WiFi), and Universal Serial Bus (USB, Universal Serial Bus); and input processing module 353, for detecting one or more user inputs or interactions from one or more input devices 332 and translating the detected inputs or interactions.
  • the device provided by the embodiment of the present application can be implemented in software.
  • Figure 6 shows a video processing device 354 stored in the memory 350.
  • the video processing device 354 can be a video processing device in an electronic device.
  • The device, which can be software in the form of programs and plug-ins, includes the following software modules: acquisition module 3541, picture generation module 3542, splicing processing module 3543, video conversion module 3544, and rendering module 3545. These modules are logical, so they can be arbitrarily combined or further divided according to the functions implemented. The functions of each module are explained below.
  • the device provided by the embodiment of the present application can be implemented in hardware.
  • For example, the device provided by the embodiment of the present application can be a processor in the form of a hardware decoding processor, which is programmed to execute the video processing method provided by the embodiments of the present application.
  • For example, the processor in the form of a hardware decoding processor may use one or more application-specific integrated circuits (ASIC, Application Specific Integrated Circuit), DSPs, programmable logic devices (PLD, Programmable Logic Device), complex programmable logic devices (CPLD, Complex Programmable Logic Device), field-programmable gate arrays (FPGA, Field-Programmable Gate Array), or other electronic components.
  • The video processing method provided by each embodiment of the present application can be executed by an electronic device, where the electronic device can be any terminal with video rendering and display functions, video processing functions, and game functions, or it can be a server; that is to say, the video processing method of the various embodiments of the present application can be executed by a terminal, by a server, or through interaction between a terminal and a server.
  • Figure 7 is an optional flow diagram of the video processing method provided by the embodiment of the present application, described below in conjunction with the steps shown in Figure 7. It should be noted that the video processing method in Figure 7 is described with the server as the execution subject by way of example.
  • Step S701: For each original video frame of the original special effects video, obtain the color channel information and the transparency channel information of each pixel in the original video frame.
  • the original special effects video can be a special effects video produced using a special effects production tool.
  • the special effects production tool can be AE software (Adobe After Effects).
  • the original special effects video can be produced and output through the AE software.
  • the original special effects video can include any type of special effects content.
  • the special effects content includes: particle special effects, light and shadow special effects, water wave effects, game gift special effects, flame special effects, etc.
  • the special effects content is content with certain transparency information. That is to say, each pixel in the original video frame includes color channel information and transparency channel information. If the terminal displays the original special effects video, the display effect of the special effects content in the original special effects video will be clear-cut and clutter-free.
  • In some embodiments, the original video frame is an image in RGBA format: the RGB channels of the original video frame store the color channel information, and the alpha channel (i.e., the transparency channel) of the original video frame stores the transparency channel information.
  • Here, the original special effects video is not a video that can be directly displayed, because the video decoding and rendering process first obtains video YUV data by hard decoding or soft decoding an MP4 format video file; before video rendering, the YUV data must be converted into RGB format data, and the RGB format data is then rendered to display one frame of video. Since the video data format before rendering is YUV, and the YUV format does not carry alpha information (that is, transparency channel information), the MP4 video before rendering cannot carry alpha information. This causes the transparency information of the original special effects video to be lost, so the rendered special effects video cannot be effectively integrated with the background: the background is blocked by the areas of the original special effects video other than the special effects content, as the conversion sketch below illustrates.
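  • For reference, a minimal sketch of the standard BT.601 full-range YUV-to-RGB conversion used in such a decode-then-render pipeline; the function is illustrative and not taken from the patent. Note that no alpha term appears anywhere in the conversion, which is exactly why transparency cannot survive this step.

```python
def yuv_to_rgb(y: int, u: int, v: int) -> tuple:
    """BT.601 full-range YUV -> RGB for one pixel. There is no alpha
    term: whatever transparency the source had is already gone here."""
    r = y + 1.402 * (v - 128)
    g = y - 0.344136 * (u - 128) - 0.714136 * (v - 128)
    b = y + 1.772 * (u - 128)
    clamp = lambda c: max(0, min(255, int(round(c))))
    return clamp(r), clamp(g), clamp(b)
```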
  • For example, the original video frames in the original special effects video are all rectangular. If the special effects content of an original video frame is a circle, then after rendering with the above method, the background will be blocked by the entire rectangle rather than only by the circle, due to the loss of transparency information.
  • The method of the embodiment of the present application extracts the transparency information and stores it effectively in the subsequent steps, which ensures accurate rendering of the special effects video based on the transparency information during rendering: the background image is blocked only by the circular special effects content, while everywhere else displays normally.
  • In some embodiments, after the special effects production tool produces the original special effects video, it can also perform frame segmentation processing on the original special effects video, that is, extract video frames from the original special effects video to obtain at least one original video frame, and output all of the original video frames. After each original video frame is obtained, the color channel information and transparency channel information of each pixel in the original video frame can be extracted.
  • When implementing, the RGBA channel values corresponding to each pixel can be obtained; the RGB channel values among the RGBA channel values are determined as the color channel information, and the alpha information in the A channel is determined as the transparency channel information.
  • In some embodiments, the position information of each pixel in the original video frame can also be obtained, and the position information can be stored in a mapping together with the color channel information and transparency channel information of the pixel. In this way, in the subsequent steps, based on the mapping relationship between the position information of each pixel and its color channel information and transparency channel information, the color channel information and transparency channel information can be assigned to the corresponding pixels of the color picture and the transparency picture, respectively, as sketched below.
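  • As an illustration, a minimal sketch of this extraction step using Pillow and NumPy; the library choice is an assumption, since the patent does not name an implementation:

```python
import numpy as np
from PIL import Image

def split_channels(frame_path: str):
    """Split one RGBA frame (e.g. a PNG exported from AE) into color
    channel info (RGB) and transparency channel info (alpha). The array
    indices double as the position information of each pixel."""
    rgba = np.asarray(Image.open(frame_path).convert("RGBA"))
    color = rgba[..., :3]   # H x W x 3 RGB channel values
    alpha = rgba[..., 3]    # H x W alpha values, 0 (transparent)..255 (opaque)
    return color, alpha
```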
  • Step S702: Based on the color channel information of each pixel in the original video frame, draw a color picture corresponding to the original video frame, and, based on the transparency channel information of each pixel in the original video frame, draw a transparency picture corresponding to the original video frame.
  • Here, the color channel information and transparency channel information of each pixel can be assigned to the color picture and the transparency picture respectively. When implementing, two canvases with the same size as the original video frame can be provided, and the color channel information and transparency channel information are then assigned to the pixels of the respective canvases to obtain a color picture and a transparency picture.
  • The canvas, also called a blank canvas, is a blank carrier that includes multiple pixels; on the blank carrier, each pixel initially carries no pixel information, that is, no color channel information or transparency channel information has been rendered for any pixel.
  • When drawing, the color channel information and transparency channel information can be assigned to the two canvases at the positions corresponding to each pixel in the original video frame, so that each pixel in one canvas carries color channel information and each pixel in the other carries transparency channel information, thereby obtaining a color picture and a transparency picture with image content.
  • In some embodiments, the canvas may be a virtual canvas; that is to say, after the color channel information and transparency channel information of each pixel are obtained, they may be assigned to the pixels of virtual canvases respectively, so as to obtain a color picture and a transparency picture.
  • Color pictures are pictures that carry color channel information, and transparency pictures are pictures that carry transparency channel information.
  • a color picture and a transparency picture can be generated for each original video frame in the original special effects video.
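  • Continuing the sketch above, one plausible reading of this drawing step is to write the alpha value into the color channels of the second canvas (step S806 below makes this explicit); this is an illustrative implementation, not the patent's own code:

```python
import numpy as np
from PIL import Image

def draw_pictures(color: np.ndarray, alpha: np.ndarray):
    """Fill two blank canvases the size of the original frame: one with
    the RGB data (the color picture), one with the alpha values written
    into all three color channels (the transparency picture)."""
    h, w = alpha.shape
    color_pic = np.zeros((h, w, 3), dtype=np.uint8)
    trans_pic = np.zeros((h, w, 3), dtype=np.uint8)
    color_pic[:] = color                 # assign color channel info per pixel
    trans_pic[:] = alpha[..., None]      # replicate alpha into R, G, B
    return Image.fromarray(color_pic), Image.fromarray(trans_pic)
```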
  • Step S703: According to the preset configuration information, perform splicing processing on the color picture and transparency picture of each original video frame to obtain at least one spliced picture.
  • Here, the preset configuration information is information used to characterize the mapping relationship between each pixel in the transparency picture and the pixels in the color picture. During the splicing process, the positions of the color picture and the transparency picture in the canvas corresponding to the spliced picture (i.e., the splicing canvas) can be determined based on the preset configuration information.
  • the preset configuration information may include the first position information of the transparency picture in the splicing canvas, the size parameters of the transparency picture, and the offset value of the transparency picture.
  • the first position information of the transparency picture in the splicing canvas refers to where the center position of the transparency picture is located on the splicing canvas corresponding to the splicing picture during the splicing process.
  • The size parameters of the transparency picture include the width and height of the transparency picture. Based on the width and height of the transparency picture, combined with the first position information of the transparency picture in the splicing canvas, the position of each pixel of the transparency picture in the splicing canvas can be determined, including the positions of the starting pixel and the ending pixel of the transparency picture.
  • the offset value of the transparency image refers to the number of pixels in the spliced image that the transparency image is offset relative to the color image in at least one direction.
  • Here, the at least one direction includes but is not limited to: the direction along the length of the splicing canvas, the direction along the width of the splicing canvas, the direction along the diagonal of the splicing canvas, and so on.
  • The splicing canvas corresponding to the spliced picture refers to a blank canvas used for adding the color picture and the transparency picture to form a spliced picture; it can be understood as a blank canvas with the same size as the spliced picture. That is to say, when performing the splicing processing, a blank canvas can first be provided, and the color picture and the transparency picture can then be added to the blank canvas in sequence, or at the same time, to form a spliced picture.
  • the relative positional relationship between the color picture and the transparency picture in the spliced picture may not be fixed.
  • the transparency image can be located at any relative position to the color image, and there is no overlapping area between the color image and the transparency image.
  • the transparency image can be located above, below, left, right, upper right, lower right, upper left, lower left, etc. of the color image.
  • As drawn, the color picture and the transparency picture have the same size, but in the spliced picture, the size at which the color picture is added and the size at which the transparency picture is added may be the same or different. That is to say, at least one of the color picture and the transparency picture can be scaled at a specific ratio, and the scaled color picture or transparency picture is then added to the spliced picture.
  • Here, the scaling ratio used for the scaling processing can be determined based on the preset configuration information; that is to say, once the preset configuration information is given, the scaling ratios of the color picture and the transparency picture are also determined, and the color picture and transparency picture can be scaled based on the given preset configuration information. Conversely, since the preset configuration information is known, it can later be determined directly from the preset configuration information by how much the color picture and the transparency picture were each scaled during the splicing process.
  • In some embodiments, only the transparency picture is scaled and the color picture is not; that is to say, the color picture is added to the blank canvas at a 1:1 size ratio, while the transparency picture is proportionally reduced before being added to the blank canvas. In this way, the amount of transparency information can be greatly reduced, with multiple pixels in the color picture corresponding to one pixel in the reduced transparency picture. In the embodiment of the present application, the transparency picture can be reduced to 0.5 times its size.
  • In some embodiments, the color pictures and transparency pictures of all original video frames use the same preset configuration information when spliced; that is to say, all original video frames in the same original special effects video share the same preset configuration information. In this way, across the spliced pictures obtained, the color pictures and transparency pictures have consistent sizes and occupy the same positions (see the splicing sketch below).
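  • A sketch of the splicing step under the layout assumed here (color picture at 1:1 on the left, transparency picture reduced to 0.5 times on the right); the canvas dimensions and placement are illustrative, since the actual layout is given by the preset configuration information:

```python
from PIL import Image

def splice(color_pic: Image.Image, trans_pic: Image.Image) -> Image.Image:
    """Place the full-size color picture and the 0.5x-scaled transparency
    picture side by side on one blank splicing canvas."""
    w, h = color_pic.size
    tw, th = w // 2, h // 2                    # scaling ratio 0.5
    small_trans = trans_pic.resize((tw, th))
    canvas = Image.new("RGB", (w + tw, h))     # blank splicing canvas
    canvas.paste(color_pic, (0, 0))            # color picture added at 1:1
    canvas.paste(small_trans, (w, 0))          # transparency starting position
    return canvas
```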
  • Step S704: Perform video conversion processing on the at least one spliced picture to obtain a transparency spliced video.
  • Here, the video conversion processing is to perform video compression processing on the obtained at least one spliced picture in chronological order to obtain the transparency spliced video.
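  • For instance, if the spliced pictures are written out as numbered PNG files, the conversion could be performed with an ffmpeg invocation along these lines; the file names, frame rate, and codec are assumptions, as the patent only requires that the result be a compressed video such as MP4:

```python
import subprocess

# Encode spliced_0001.png, spliced_0002.png, ... into an H.264 MP4 in
# chronological order. yuv420p matches the YUV pipeline described above;
# transparency survives only because it was baked into each spliced frame.
subprocess.run([
    "ffmpeg", "-framerate", "30",
    "-i", "spliced_%04d.png",
    "-c:v", "libx264", "-pix_fmt", "yuv420p",
    "transparency_spliced.mp4",
], check=True)
```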
  • Step S705: Perform special effects video rendering based on the preset configuration information and the transparency spliced video to obtain a transparency special effects video.
  • Here, the channel information of the pixels can be obtained from each video frame of the transparency spliced video (i.e., each spliced picture) based on the preset configuration information; that is, the color channel information is obtained from the color picture in the spliced picture, and the transparency channel information is obtained from the transparency picture in the spliced picture.
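  • A sketch of this per-pixel recombination at render time; in practice it would typically run in a fragment shader on the GPU, but a NumPy version is shown for clarity, under the same layout assumptions as the splicing sketch above:

```python
import numpy as np

def render_frame(spliced: np.ndarray, width: int, height: int) -> np.ndarray:
    """Rebuild an RGBA frame from one decoded spliced frame: RGB from
    the left (color) region, alpha from the right (0.5x transparency)
    region, with the fixed layout standing in for the preset config."""
    color = spliced[:height, :width, :3]
    # Map color pixel (x, y) to transparency pixel (x // 2, y // 2),
    # then offset into the right-hand region of the spliced frame.
    ys = np.arange(height) // 2
    xs = width + np.arange(width) // 2
    alpha = spliced[ys[:, None], xs[None, :], 0]   # R, G, B all hold alpha
    return np.dstack([color, alpha])               # H x W x 4 RGBA
```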
  • In the video processing method provided by the embodiment of the present application, for each original video frame in the original special effects video, a color picture and a transparency picture corresponding to the original video frame are generated based on the color channel information and transparency channel information of each pixel in the original video frame; the color picture and the transparency picture are spliced to obtain at least one spliced picture, and the at least one spliced picture is then subjected to video conversion processing to obtain a transparency spliced video, so that special effects video rendering can be performed based on the preset configuration information and the transparency spliced video to obtain a transparency special effects video.
  • In this way, the generated transparency spliced video also carries the transparency information of the original special effects video, so that when rendering the special effects video, the transparency special effects video can be rendered based on the transparency information; the special effects in the original special effects video can thus be highly restored when the transparency special effects video is played, greatly reducing the production cost of transparency special effects videos and the memory usage during production.
  • the video processing system at least includes a terminal and a server, and a special effects video production application is installed on the terminal, or a video playback application is installed on the terminal, and the video playback application has a special effects video production function module.
  • the special effects video production function module can use the video processing method provided by the embodiment of the present application to generate a transparency special effects video.
  • FIG 8 is another optional flow diagram of the video processing method provided by the embodiment of the present application. As shown in Figure 8, the method includes the following steps:
  • Step S801: The terminal produces the original special effects video using a special effects production tool.
  • the user can produce the original special effects video through the special effects production tool in the special effects video production function module.
  • Here, the terminal can receive the special effects production operation input by the user through the special effects video production function module of the video playback application. When inputting the special effects production operation, the user can input special effect parameters such as the type of the special effects, the content of the special effects, the size of the special effects, the delay time of the special effects, the color of the special effects, and the distribution of the special effects.
  • the special effects production tool produces the original special effects video based on the input special effects parameters.
  • Step S802: The terminal exports each original video frame in the original special effects video through the special effects production tool, and encapsulates each original video frame into a video processing request.
  • the original special effects video can be divided into frames or video frame recognition can be performed to obtain each original video frame in the original special effects video and output each original video frame.
  • the original video frames are stored in a specific storage location of the video playback application.
  • each original video frame can also be encapsulated into a video processing request.
  • The video processing request is used to request processing of each original video frame in the original special effects video, so as to obtain and render a transparency special effects video with the special effects.
  • Step S803: The terminal sends the video processing request to the server.
  • Step S804: The server collects the color channel information and transparency channel information of each pixel in each original video frame.
  • the server parses the pixel value of each pixel in each original video frame to extract the color channel information and transparency channel information of each pixel in the original video frame.
  • the original video frames exported by the special effects production tool can be PNG format pictures.
  • When implementing, the RGBA channel values corresponding to each pixel in the PNG format picture can be obtained; the RGB channel values among the RGBA channel values are determined as the color channel information, where the color channel information includes the R channel value, G channel value, and B channel value, and the alpha information in the A channel is determined as the transparency channel information.
  • Step S805: For each original video frame, the server adds the color channel information of each pixel in the original video frame to the color channel of the first canvas picture to obtain a color picture corresponding to the original video frame.
  • the first canvas picture is a blank canvas to which no pixel value information of any pixel points is added. That is to say, the first canvas picture does not include any pixel point information.
  • the first canvas picture has the same dimensions as the original video frame.
  • When implementing, the color channel information of each pixel can be added, in turn according to the position of the pixel in the original video frame, to the color channel at the corresponding position of the first canvas picture, obtaining a color picture containing the color channel information of the original video frame. The size of the color picture is the same as the size of the original video frame.
  • color channel information may include RGB channel values.
  • the RGB channel value of each pixel in the original video frame can be added to the RGB channel of the first canvas picture to obtain a color picture corresponding to the original video frame.
  • Step S806: For each original video frame, the server adds the transparency channel information of each pixel in the original video frame to the color channel of the second canvas picture to obtain a transparency picture corresponding to the original video frame.
  • the second canvas picture is also a blank canvas to which no pixel value information of any pixel points is added. That is to say, the second canvas picture does not include any pixel point information.
  • the second canvas picture also has the same dimensions as the original video frame.
  • When implementing, the transparency channel information of each pixel can be added, in turn according to the position of the pixel in the original video frame, to the color channel at the corresponding position of the second canvas picture, obtaining a transparency picture containing the transparency channel information of the original video frame. The size of the transparency picture is also the same as the size of the original video frame.
  • transparency channel information may include transparency values.
  • When implementing, the transparency value of each pixel in the original video frame can be added to the RGB channels of the second canvas picture to obtain a transparency picture corresponding to the original video frame.
  • the transparency value can range from 0 to 255, where 0 means completely transparent and 255 means opaque. The higher the transparency value, the greater the opacity.
  • Step S807: The server determines the starting position of the transparency picture in the splicing canvas and the scaling ratio of the transparency picture based on at least the first position information of the transparency picture in the splicing canvas, the size parameters of the transparency picture, and the offset value of the transparency picture.
  • the first position information of the transparency picture in the splicing canvas, the size parameters of the transparency picture, and the offset value of the transparency picture constitute the above-mentioned preset configuration information.
  • the first position information of the transparency picture in the splicing canvas is used to represent where the center position of the transparency picture is located on the splicing canvas corresponding to the splicing picture during the splicing process.
  • The size parameters of the transparency picture include the width and height of the transparency picture. Based on the width and height of the transparency picture, combined with the first position information of the transparency picture in the splicing canvas, the position of each pixel of the transparency picture in the splicing canvas can be determined, including the positions of the starting pixel and the ending pixel of the transparency picture.
  • the offset value of the transparency image refers to the number of pixels in the spliced image that the transparency image is offset relative to the color image in at least one direction.
  • Here, the at least one direction includes but is not limited to: the direction along the length of the splicing canvas, the direction along the width of the splicing canvas, the direction along the diagonal of the splicing canvas, and so on.
  • the starting position of the transparency picture in the splicing canvas can be determined based on the first position information of the transparency picture in the splicing canvas and the size parameters of the transparency picture.
  • In some embodiments, the size parameters of the transparency picture include its width and height. Therefore, based on the width and height of the transparency picture and the position of its center in the splicing canvas corresponding to the spliced picture, the positions of the starting pixel (i.e., the first pixel) and the ending pixel (i.e., the last pixel) of the transparency picture can be determined. In other words, at least the starting position (that is, the position of the starting pixel) and the ending position of the transparency picture in the splicing canvas can be determined. Note that since the size parameters of the transparency picture are known, the ending position usually does not need to be considered; it is only necessary to determine the starting position of the transparency picture in the splicing canvas.
  • In other embodiments, based on the width and height of the transparency picture and the position of its center in the splicing canvas corresponding to the spliced picture, the position of each pixel on the boundary lines of the transparency picture (i.e., the picture's frame) can be determined. In this way, the position of the pixels on any one boundary line of the frame can be taken as the starting position of the transparency picture in the splicing canvas, so that, using that boundary line as the starting point, the position of each pixel of the transparency picture in the splicing canvas can be determined.
  • the scaling ratio of the transparency picture may also be determined based on the size parameter of the transparency picture and the offset value of the transparency picture.
  • The offset value of the transparency picture refers to the number of pixels by which the transparency picture is offset in at least one direction in the spliced picture; that is, based on the offset value of the transparency picture, the pixel in the transparency picture corresponding to any pixel in the color picture can be determined, together with the number of pixels of offset between the two.
  • For example, the transparency picture is reduced to 0.5 times its size. Before scaling, the transparency picture and the color picture are in a 1:1 size ratio, so the pixel at position (0, 0) in the color picture corresponds to the pixel at position (0, 0) in the transparency picture. After scaling, the positions of the pixels in the transparency picture are shifted: the pixel at position (2, 2) in the color picture corresponds to the pixel at position (1, 1) in the transparency picture, and the pixel at position (4, 4) in the color picture corresponds to the pixel at position (2, 2) in the transparency picture. It can be seen that, in the latter case, the number of pixels by which the transparency picture is offset relative to the color picture in both the length direction and the width direction is 2 (see the helper sketched below).
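  • A one-line formalization of this correspondence, with the scale ratio of 0.5 used in the example; the helper function is hypothetical and not named in the patent:

```python
def color_to_transparency_coord(x: int, y: int, scale: float = 0.5) -> tuple:
    """Map a color-picture pixel (x, y) to its transparency-picture
    pixel. With scale = 0.5: (0, 0) -> (0, 0), (2, 2) -> (1, 1),
    (4, 4) -> (2, 2), matching the offsets described above."""
    return int(x * scale), int(y * scale)
```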
  • Step S808: The server obtains the second position information of the color picture in the splicing canvas.
  • the second position information refers to where the center position of the color picture is located on the splicing canvas corresponding to the spliced picture during the splicing process.
  • Step S809: The server adds the color picture and transparency picture of each original video frame to the splicing canvas according to the second position information, the starting position, and the scaling ratio to form at least one spliced picture.
  • In some embodiments, step S809 can be implemented in the following manner: for each original video frame, the color picture is added to the position in the splicing canvas corresponding to the second position information, obtaining a splicing canvas with the color picture. That is to say, taking the center position of the color picture on the splicing canvas as the starting point, the color picture is expanded pixel by pixel from the center position, thereby adding each pixel of the color picture to the splicing canvas to form a splicing canvas with the color picture.
  • The transparency picture can also be scaled according to the scaling ratio to obtain a scaled transparency picture; for example, the transparency picture can be reduced to half its size using a reduction ratio of 0.5.
  • Then, the scaled transparency picture is added to the splicing canvas carrying the color picture, using the starting position as the position at which to begin adding it, thereby obtaining a spliced picture. That is to say, according to the position information of the starting position, each pixel of the scaled transparency picture is added to the splicing canvas with the color picture.
  • step S809 can also be implemented the other way around, adding the transparency picture to the splicing canvas first: the transparency picture is scaled according to the scaling ratio to obtain a scaled transparency picture, which is added to the splicing canvas with the starting position as its starting adding position; the color picture is then added to the position corresponding to the second position information in the splicing canvas carrying the scaled transparency picture, to obtain a spliced picture (a minimal sketch of the splicing follows below).
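  • A minimal sketch of the splicing itself, assuming the top/bottom layout described later in this document (color picture on top, scaled transparency picture below, top-left aligned, no margins); Pillow is used purely for illustration:

```python
from PIL import Image

def build_spliced_picture(color_pic: Image.Image, alpha_pic: Image.Image,
                          scale: float = 0.5) -> Image.Image:
    """Splice the color picture (top) and the scaled transparency
    picture (bottom) onto a single splicing canvas."""
    w, h = color_pic.size
    # Scale the transparency picture according to the scaling ratio.
    alpha_scaled = alpha_pic.resize((int(w * scale), int(h * scale)))
    # A canvas tall enough for the color picture plus the scaled alpha picture.
    canvas = Image.new("RGB", (w, h + alpha_scaled.height), (0, 0, 0))
    canvas.paste(color_pic, (0, 0))      # placed per the second position information
    canvas.paste(alpha_scaled, (0, h))   # placed at the starting position
    return canvas
```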
  • Step S810 The server performs video conversion processing on at least one spliced picture to obtain a transparency spliced video.
  • the video conversion process refers to performing video compression processing on at least one obtained spliced picture in chronological order to obtain a transparency spliced video.
  • Step S811 The server performs special effects video rendering based on the preset configuration information and transparency splicing video to obtain a transparency special effects video.
  • Step S812 The server sends the transparency special effect video to the terminal.
  • Step S813 The terminal displays the transparency special effect video on the current interface.
  • the video processing method provided by the embodiment of the present application determines the starting position of the transparency picture in the splicing canvas and the scaling ratio of the transparency picture based on the first position information of the transparency picture in the splicing canvas, the size parameters of the transparency picture, and the offset value of the transparency picture, so that, according to the second position information of the color picture in the splicing canvas, the starting position, and the scaling ratio, the color picture and transparency picture of each original video frame can be added to the splicing canvas, realizing accurate splicing of the color picture and the transparency picture to form at least one spliced picture.
  • the spliced picture is also a picture that carries the color channel information and transparency channel information of each pixel in the original video frame.
  • the transparency splicing video corresponds to a video compression encoding format
  • the video compression encoding format includes but is not limited to MP4 video format.
  • the video processing method in the embodiment of the present application may also include the following steps. First, the server obtains the first position information of the transparency picture in the splicing canvas, the size parameters of the transparency picture, and the offset value of the transparency picture. Then, according to the preset data storage format, the server performs data format conversion on the first position information, the size parameters, and the offset value to obtain a configuration information string. Finally, the server inserts the configuration information string into a preset position of the data header of the video compression encoding format.
  • the data format conversion may be to convert the first position information of the transparency image in the splicing canvas, the size parameters of the transparency image, and the offset value of the transparency image into a JSON string to obtain information in JSON string format
  • the information in JSON string format is the configuration information string.
  • the information in JSON string format can be inserted into the preset position of the data header in the MP4 video format. For example, it can be inserted into the second box position of the data header.
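  • A rough sketch of this serialization step; the field names below are hypothetical, since the embodiment only specifies that the first position information, the size parameters, and the offset value are converted into a JSON string and inserted into the data header:

```python
import json

# Hypothetical field names for the three values named in this embodiment.
alpha_config = {
    "alphaPos": [0, 1080],     # first position information in the splicing canvas
    "alphaSize": [960, 540],   # width and height of the transparency picture
    "alphaOffset": [0, 0],     # offset value of the transparency picture
}
config_str = json.dumps(alpha_config)  # the configuration information string

# At rendering time the string is read back from the data header and parsed.
assert json.loads(config_str)["alphaSize"] == [960, 540]
```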
  • the above video processing method can also be implemented by a terminal. That is to say, the terminal serves as the execution subject.
  • the terminal performs video frame extraction on the original special effects video to obtain at least one original video frame, and performs information extraction on each original video frame to obtain the color channel information and transparency channel information of each pixel in each original video frame. A color picture and a transparency picture corresponding to each original video frame are then generated; the color pictures and transparency pictures of the original video frames are spliced according to the preset configuration information to obtain at least one spliced picture; the at least one spliced picture is subjected to video conversion processing to obtain a transparency spliced video; finally, special effects video rendering is performed based on the preset configuration information and the transparency spliced video to obtain the transparency special effects video. When the transparency special effects video is obtained, it is displayed on the current interface.
  • the terminal obtains the preset configuration information, such as the first position information of the transparency picture in the splicing canvas, the size parameters of the transparency picture, and the offset value of the transparency picture; then, according to the preset data storage format, it performs data format conversion on the first position information, the size parameters, and the offset value to obtain a configuration information string, and inserts the configuration information string into a preset position of the data header of the video compression encoding format.
  • since the configuration information string is inserted into the preset position of the data header of the video compression encoding format through the above method, that is to say, the preset configuration information is pre-stored at the preset position of the data header, the preset configuration information can later be obtained directly from that preset position.
  • accordingly, the following steps can also be implemented: obtaining the configuration information string from the preset position of the data header; and performing string parsing on the configuration information string to obtain the preset configuration information.
  • Figure 9 is another optional flow diagram of the video processing method provided by the embodiment of the present application. As shown in Figure 9, the method includes the following steps:
  • Step S901 The terminal produces the original special effects video using a special effects production tool.
  • Step S902 The terminal exports each original video frame in the original special effects video through the special effects production tool, and encapsulates each original video frame into a video processing request.
  • Step S903 The terminal sends the video processing request to the server.
  • Step S904 The server collects the color channel information and transparency channel information of each pixel in each original video frame.
  • steps S901 to S904 are the same as the above-mentioned steps S801 to S804, and will not be described again in the embodiment of the present application.
  • Step S905 For each original video frame, the server generates the color picture and transparency picture corresponding to the original video frame based on the color channel information and transparency channel information of each pixel in that frame.
  • the color channel information and transparency channel information of each pixel can be assigned to the color picture and the transparency picture respectively. Specifically, two canvases of the same size as the original video frame can be provided, and the color channel information and transparency channel information are then assigned to each pixel of the respective canvas to obtain the color picture and the transparency picture.
  • Color pictures are pictures that carry color channel information
  • transparency pictures are pictures that carry transparency channel information.
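  • A minimal sketch of this channel separation, assuming the original video frame is available as an (H, W, 4) RGBA array; NumPy is used purely for illustration:

```python
import numpy as np

def split_frame(rgba: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Split an (H, W, 4) RGBA frame into a color picture and a
    transparency picture of the same size."""
    color_pic = rgba[..., :3].copy()   # carries the RGB channel values
    alpha = rgba[..., 3]
    # Transparency picture: R, G, and B all carry the alpha value.
    alpha_pic = np.stack([alpha, alpha, alpha], axis=-1)
    return color_pic, alpha_pic
```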
  • Step S906 The server performs splicing processing on the color picture and transparency picture of each original video frame according to the preset configuration information to obtain at least one spliced picture.
  • the preset configuration information is information used to characterize the mapping relationship between each pixel in the transparency image and the pixels in the color image. During the splicing process, based on the preset configuration information, the positions of the color image and the transparency image in the splicing canvas corresponding to the spliced image can be determined.
  • Step S907 The server performs video conversion processing on at least one spliced picture to obtain a transparency spliced video.
  • Step S908 The server performs video decoding on the transparency spliced video to obtain decoded format data of each spliced picture in the transparency spliced video.
  • the decoded format data of the transparency spliced video can be obtained through hardware decoding or software decoding (for example, performed by a video player), where the decoded format data can be video frame YUV data.
  • Step S909 The server performs data format conversion on the decoded format data of each spliced picture to obtain color format data of each spliced picture; wherein the color format data includes a color channel data part and a transparency channel data part.
  • the YUV data of the video frame corresponding to each spliced picture can be converted into color format data in RGB data format through a conversion formula. Since the spliced picture is formed by splicing a color picture and a transparency picture, where the color picture carries color channel information and the transparency picture carries transparency channel information, the color format data in RGB data format obtained after video decoding and data format conversion includes a color channel data part and a transparency channel data part. The color channel data part corresponds to the color channel information carried in the color picture, and the transparency channel data part corresponds to the transparency channel information carried in the transparency picture.
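  • The embodiment does not fix a particular conversion formula; as an illustrative sketch, a full-range BT.601 conversion (assuming the U and V planes have already been upsampled to the resolution of the Y plane) could look like this:

```python
import numpy as np

def yuv_to_rgb(y: np.ndarray, u: np.ndarray, v: np.ndarray) -> np.ndarray:
    """Convert full-range BT.601 YUV planes of equal shape into an
    (H, W, 3) uint8 RGB array."""
    y = y.astype(np.float32)
    u = u.astype(np.float32) - 128.0
    v = v.astype(np.float32) - 128.0
    r = y + 1.402 * v
    g = y - 0.344136 * u - 0.714136 * v
    b = y + 1.772 * u
    return np.clip(np.stack([r, g, b], axis=-1), 0, 255).astype(np.uint8)
```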
  • Step S910 The server performs special effects video rendering based on the preset configuration information and the color channel data part and the transparency channel data part of the spliced image to obtain a transparency special effect video.
  • step S910 can be implemented in the following manner: first, the information in JSON string format is obtained from the preset position of the data header in the MP4 video format, that is, the preset configuration information stored at that preset position is obtained. Then, based on the preset configuration information, the mapping relationship between each color channel sub-data in the color channel data part and each transparency channel sub-data in the transparency channel data part is determined.
  • the color channel sub-data is the data corresponding to each pixel in the color channel data part
  • the transparency channel sub-data is the data corresponding to each pixel in the transparency channel data part.
  • the color channel sub-data and transparency channel sub-data that have a mapping relationship are determined to be channel data of the same pixel. That is to say, color channel sub-data and transparency channel sub-data that represent the same pixel are associated through the mapping relationship to jointly constitute the channel data of that pixel. Furthermore, the position information within the color picture of the color channel sub-data in each pixel's channel data is determined as the position information of that pixel. When the color picture and the transparency picture are spliced, the transparency picture is scaled but the color picture is not, so each pixel in the color picture corresponds one-to-one to a pixel in the original video frame, and the position information of each pixel in the color picture corresponds to the position information of the pixel in the original video frame. The position information of the color channel sub-data in the color picture therefore also corresponds to the position information of the pixel in the original video frame, and can be determined as the position information of the pixel.
  • special effects video rendering is then performed on the channel data of each pixel to obtain the transparency special effects video. Specifically, according to the position information of each pixel, the color channel sub-data in the pixel's channel data is used as the RGB data, and the transparency channel sub-data in the pixel's channel data is used as the alpha information, to implement the special effects video rendering and obtain the transparency special effects video (see the recombination sketch below).
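  • A minimal sketch of this recombination, assuming the upper/lower layout described later in this document, a 0.5 scaling ratio, top-left alignment, and no margins; in practice these values would come from the preset configuration information:

```python
import numpy as np

def recombine_rgba(spliced_rgb: np.ndarray, color_h: int,
                   scale: float = 0.5) -> np.ndarray:
    """Rebuild an (H, W, 4) RGBA frame from a spliced RGB frame whose
    upper part is the color picture and whose lower part is the
    transparency picture scaled by `scale`."""
    color = spliced_rgb[:color_h]            # color channel data part
    alpha_part = spliced_rgb[color_h:]       # transparency channel data part
    h, w = color.shape[:2]
    ys = (np.arange(h) * scale).astype(int)  # row mapping into the alpha part
    xs = (np.arange(w) * scale).astype(int)  # column mapping into the alpha part
    alpha = alpha_part[ys][:, xs, 0]         # any RGB channel holds the alpha value
    return np.dstack([color, alpha])
```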
  • Step S911 The server sends the transparency special effect video to the terminal.
  • Step S912 The terminal renders the background video of the currently running application.
  • the background video can be any type of video, such as short videos, TV series, game videos, etc.
  • the background video can also be a picture, that is, the picture displayed by the terminal on the current interface.
  • Step S913 The terminal receives a special effect triggering operation.
  • Step S914 In response to the special effect triggering operation, the terminal adds the transparency special effect video to a specific position of the background video.
  • the transparency special effects video is added to a specific position of the background video, which may be to display the transparency special effects video on top of the background video.
  • the user can specify a specific location, and the terminal can also perform video analysis on the background video to intelligently determine the specific location for overlaying the transparency special effect video.
  • Scenario 1 A special effects video production application is installed on the terminal.
  • the special effects video production application is used to produce and generate a transparency special effects video.
  • the user can input a special effects video production request on the client side of the special effects video production application.
  • the server of the special effects video production application can respond to the special effects video production request and use the video processing method provided by the embodiment of the present application to generate transparency carrying transparency channel information. Splice videos and further generate transparency effects videos.
  • the transparency special effects video can be sent to the client of the terminal as the response result of the special effects video production request and displayed on the client; the transparency special effects video can also serve as an output product of the special effects video production application for use in other video applications, that is, other video applications can directly apply the transparency special effects video generated by the special effects video production application.
  • Scenario 2 A video playback application is installed on the terminal.
  • the video playback application has a special effects video production function module, through which the video processing method provided by the embodiment of the present application can be used to generate a transparency special effects video. During playback, the special effects video production function module can be operated to trigger the video processing process of the embodiment of the present application to generate a transparency special effects video; while the transparency special effects video is being generated, it is superimposed on the currently playing video and displayed simultaneously with it.
  • Scenario 3 A game application is installed on the terminal. While the game application is running, if the player wants to add special effects, the player can operate on the game client and select the special effects button; the video processing method of the embodiment of this application is then used to generate a transparency special effects video, which is superimposed onto the game screen on the game interface and played simultaneously with the game screen.
  • when the user inputs a special effect triggering operation through the client, he or she can also input the playback position of the transparency special effects video. After the server generates the transparency special effects video, it feeds the video back to the terminal, and the client of the terminal inserts the transparency special effects video at that playback position, or plays the transparency special effects video at that playback position.
  • when the user inputs the special effect triggering operation through the client, he or she can also input the display quantity of the special effect content. When generating the transparency special effects video, the server can generate, according to that display quantity, a transparency special effects video containing the corresponding number of pieces of special effect content, and play it. Alternatively, the server can generate a transparency special effects video containing one piece of special effect content and, after generating it, copy it to obtain the same number of transparency special effects videos as the display quantity, and play that number of transparency special effects videos simultaneously.
  • the special effects content refers to the objects in the transparency special effects video.
  • the special effects content can be a pair of shoes, a bouquet of flowers, a book and other objects in the displayed special effects.
  • when a user inputs a special effect triggering operation for multiple types of special effect content through the client, the server can respond by simultaneously generating transparency special effects videos for the multiple types of special effect content and feeding the generated videos back to the client, where they are played sequentially or simultaneously. If the playing positions of the multiple transparency special effect videos are the same, they can be played in sequence; if the playing positions are different, they can be played at the same time.
  • the video processing method provided by the embodiment of the present application can at least realize the production and display of the light and shadow special effects 1001 in Figure 10, the game gift special effects (with alpha transparent background) 1102 of the game gift 1101 in Figure 11, the particle special effects 1201 in Figure 12, and the flame special effects 1301 in Figure 13.
  • the rendering of the game is actually the rendering of each pixel, and each pixel is divided into four channels: R, G, B, and alpha.
  • RGB is the color channel
  • alpha is the transparency channel of the pixel.
  • the entire game special effects rendering, various cool animations, such as light and shadow effects, etc., are all inseparable from the use of alpha blending.
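  • The blend itself is standard source-over compositing. As a one-channel illustration (alpha normalized to [0, 1]; a minimal sketch, not code from the embodiment):

```python
def alpha_blend(src: float, dst: float, alpha: float) -> float:
    """Source-over alpha blending for a single channel: the special-effect
    (source) pixel is weighted by alpha, the background (destination)
    pixel by (1 - alpha)."""
    return src * alpha + dst * (1.0 - alpha)

# A half-transparent white effect over a black background gives mid grey.
assert alpha_blend(255.0, 0.0, 0.5) == 127.5
```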
  • for MP4 video files, video YUV data is obtained through hard decoding or soft decoding. Before rendering, the YUV data needs to be converted into RGB data format, and then the RGB data is rendered to display a frame of video.
  • the embodiment of the present application is based on video technology. Since the video data format is YUV, an MP4 video file cannot carry alpha transparency information. Therefore, the embodiment of the present application opens up additional space dedicated to storing the alpha transparency information: the pictures are spliced into upper and lower parts, and at rendering time the corresponding alpha transparency value is fetched from the corresponding position to complete the alpha-blended rendering.
  • Figure 14 is a schematic diagram of the overall process of the video processing method provided by the embodiment of the present application. As shown in Figure 14, the process of the entire solution is divided into two parts: alpha video production process 1401 and alpha usage process 1402, where the alpha video production process is the process from game special effects to generating alpha video, and the alpha usage process is the process from alpha video decoding to rendering.
  • each PNG image includes alpha transparency information.
  • Alpha transparency information is the information stored in the A channel in the RGBA of the image, indicating the transparency of the image. 0 means completely transparent, and 255 means opaque.
  • each PNG image is separated into two images: an RGB image, and an alpha information image whose R, G, and B channel values are all equal to the alpha transparency value.
  • there are thus two separated pictures, namely the RGB picture 161 and the alpha picture 162. The two pictures obtained above (one RGB picture, and one alpha picture with RGB channel values equal to the alpha transparency value) are spliced up and down (in other embodiments, other splicing methods are also possible, and the picture can also be scaled; the spliced picture shown in Figure 17 is formed by scaling the alpha picture by 0.5 times) to form a new PNG picture. As shown in Figure 17, the RGB picture 171 is located in the upper half of the spliced picture, and the scaled alpha picture 172 is located in the lower half.
  • alpha configuration information, i.e., the above-mentioned preset configuration information, is also produced during the splicing.
  • the reason why the PNG image is separated into two images that are then spliced back into one image is that the PNG image carries four channels of RGBA information. If the PNG image were directly converted into a video, what the player obtains after decoding the video is YUV data, and after the YUV data is converted into RGB data the alpha information is lost. Splitting the PNG image into two images and then splicing them into one image solves this problem; it is equivalent to opening up a picture area dedicated to storing the alpha information.
  • the multiple spliced pictures obtained by splicing are then converted into an MP4 video, and the alpha configuration information is inserted into the MP4 video.
  • the alpha configuration information can be inserted into the second box position of the MP4 data header. Figure 18 is a schematic diagram of the insertion position of the alpha configuration information provided by the embodiment of the present application; as shown in Figure 18, the configuration information is inserted into the moov box 181.
  • here, the MP4 video format is selected as the carrier. In other embodiments, other video formats can also be selected as the carrier; that is, a video in another video format is formed, and the alpha configuration information is inserted at a specific position in the data header of that video format.
  • at this point, the alpha video (the above-mentioned transparency splicing video) has been obtained. If an ordinary video player plays the alpha video file, it plays normally, but only the spliced video with the upper and lower pictures joined together is visible, and the special effects cannot be seen. Therefore, alpha blending rendering technology needs to be used in the Unity shader of the game application to render the game special effects normally.
  • rendering each pixel requires RGB data and alpha data.
  • the RGB data comes from the upper part of the video frame.
  • the alpha data comes from the lower half of the video frame, and the positions of the two must correspond accurately. To achieve this, the alpha configuration information read from the MP4 video is used, which includes the width and height of the video, the margins, the starting offset position of the lower half, and so on.
  • the video is rendered into an RGBA four-channel video frame (that is, the video frame of the transparency special effect video) and displayed on the screen.
  • the rendering process shown in Figure 19 renders the video frame 192 of the transparency special effects video based on the data in the RGB picture 1911 and the alpha picture 1912 of the spliced picture 191; such a video frame 192 therefore also carries transparency information.
  • the video processing method provided by the embodiment of the present application can develop game special effects programs more efficiently, and supports particle special effects, transparent background display, and so on. At the same time, the generated transparency special effects video file takes up little memory, and decoding can be implemented in hardware.
  • the alpha data in the lower half can be further compressed, so various shapes of splicing methods can appear.
  • the video processing method of the embodiment of the present application uses MP4 video as a carrier, makes full use of the MP4 mature technology stack, and uses alpha mixing technology as a solution for game special effects rendering. It is a combination of MP4 and alpha rendering technology.
  • Alpha rendering technology is implemented using a Unity shader and is fully independently developed and tested.
  • the alpha data compression mode can optimize the upper and lower splicing modes so that the alpha data only accounts for 1/3 of the data of the entire spliced picture.
  • for content related to user information, such as the original special effects video, color picture, transparency picture, transparency splicing video, and transparency special effects video, if user information or enterprise information is involved, user permission or consent needs to be obtained, and the collection, use, and processing of the relevant data need to comply with the relevant laws, regulations, and standards of the relevant countries and regions.
  • the video processing device 354 includes:
  • the acquisition module is configured to acquire the color channel information of each pixel and the transparency channel information of each pixel in the original video frame for each original video frame of the original special effects video;
  • the picture generation module is configured to draw a color picture corresponding to the original video frame based on the color channel information of each pixel in the original video frame, and to draw a transparency picture corresponding to the original video frame based on the transparency channel information of each pixel in the original video frame; the splicing processing module is configured to perform splicing processing on the color picture and transparency picture of each original video frame according to the preset configuration information to obtain at least one spliced picture; the video conversion module is configured to perform video conversion processing on the at least one spliced picture to obtain a transparency spliced video; the rendering module is configured to perform special effects video rendering based on the preset configuration information and the transparency spliced video to obtain a transparency special effects video.
  • the picture generation module is further configured to: add the color channel information of each pixel in the original video frame to the color channel of a first canvas picture to obtain the color picture corresponding to the original video frame; and add the transparency channel information of each pixel in the original video frame to the color channel of a second canvas picture to obtain the transparency picture corresponding to the original video frame; both the first canvas picture and the second canvas picture are blank canvases without any added pixel value information.
  • the color channel information includes RGB channel values
  • the transparency channel information includes transparency values
  • the picture generation module is further configured to: add the RGB channel values of each pixel in the original video frame to the RGB channels of the first canvas picture to obtain the color picture corresponding to the original video frame; and add the transparency value of each pixel in the original video frame to the RGB channels of the second canvas picture to obtain the transparency picture corresponding to the original video frame.
  • the preset configuration information includes at least one of the following: the first position information of the transparency picture in the splicing canvas, the size parameters of the transparency picture, and the offset value of the transparency picture; the splicing processing module is further configured to: determine the starting position of the transparency picture in the splicing canvas and the scaling ratio of the transparency picture based on the first position information of the transparency picture in the splicing canvas, the size parameters of the transparency picture, and the offset value of the transparency picture; obtain the second position information of the color picture in the splicing canvas; and, according to the second position information, the starting position, and the scaling ratio, add the color picture and transparency picture of each original video frame to the splicing canvas to form the at least one spliced picture.
  • the splicing processing module is further configured to: determine the starting position of the transparency picture in the splicing canvas based on the first position information of the transparency picture in the splicing canvas and the size parameters of the transparency picture; and determine the scaling ratio of the transparency picture based on the size parameters of the transparency picture and the offset value of the transparency picture.
  • the splicing processing module is further configured to: for each original video frame, add the color picture to the position corresponding to the second position information in the splicing canvas to obtain a splicing canvas carrying the color picture; scale the transparency picture according to the scaling ratio to obtain a scaled transparency picture; and, using the starting position as the starting adding position of the scaled transparency picture, add the scaled transparency picture to the splicing canvas carrying the color picture to obtain a spliced picture.
  • the transparency splicing video corresponds to a video compression encoding format
  • the device further includes: an information acquisition module configured to acquire the first position information of the transparency picture in the splicing canvas, the size parameters of the transparency picture, and the offset value of the transparency picture;
  • the format conversion module is configured to, according to the preset data storage format, perform data format conversion on the first position information of the transparency picture in the splicing canvas, the size parameters of the transparency picture, and the offset value of the transparency picture to obtain a configuration information string;
  • the insertion module is configured to insert the configuration information string into a preset position of the data header of the video compression encoding format.
  • the device further includes: a string acquisition module configured to, before the color picture and transparency picture of each original video frame are spliced according to the preset configuration information, obtain the configuration information string from the preset position of the data header; and a string parsing module configured to perform string parsing on the configuration information string to obtain the preset configuration information.
  • the rendering module is further configured to: perform video decoding on the transparency spliced video to obtain decoded format data of each spliced picture in the transparency spliced video; perform data format conversion on the decoded format data of each spliced picture to obtain color format data of each spliced picture, wherein the color format data includes a color channel data part and a transparency channel data part; and perform special effects video rendering based on the preset configuration information and the color channel data part and transparency channel data part of the spliced picture to obtain the transparency special effects video.
  • the rendering module is further configured to: determine, based on the preset configuration information, the mapping relationship between each color channel sub-data in the color channel data part and each transparency channel sub-data in the transparency channel data part; determine color channel sub-data and transparency channel sub-data that have a mapping relationship as the channel data of the same pixel; determine the position information, within the color picture, of the color channel sub-data in the channel data of each pixel as the position information of that pixel; and, according to the position information of each pixel, perform special effects video rendering on the channel data of each pixel to obtain the transparency special effects video.
  • the device further includes: a background video rendering module configured to render the background video of the currently running application; and a video adding module configured to add the transparency special effect video to the background in response to a special effect triggering operation. specific location of the video.
  • Embodiments of the present application provide a computer program product or computer program that includes executable instructions, the executable instructions being computer instructions stored in a computer-readable storage medium. When the processor of an electronic device reads the executable instructions from the computer-readable storage medium and executes them, the electronic device is caused to execute the method described above in the embodiments of the present application.
  • Embodiments of the present application provide a storage medium in which executable instructions are stored. When the executable instructions are executed by a processor, they cause the processor to execute the method provided by the embodiments of the present application, for example, the method shown in Figure 7.
  • the storage medium may be a computer-readable storage medium, such as ferroelectric memory (FRAM, Ferromagnetic Random Access Memory), read-only memory (ROM, Read Only Memory), programmable read-only memory (PROM, Programmable Read Only Memory), erasable programmable read-only memory (EPROM, Erasable Programmable Read Only Memory), electrically erasable programmable read-only memory (EEPROM, Electrically Erasable Programmable Read Only Memory), flash memory, magnetic surface memory, optical disk, or CD-ROM (Compact Disk-Read Only Memory); it may also be any of various devices including one of the above memories or any combination thereof.
  • executable instructions may take the form of a program, software, software module, script, or code, written in any form of programming language (including compiled or interpreted languages, or declarative or procedural languages), and may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. Executable instructions may, but do not necessarily, correspond to files in a file system, and may be stored as part of a file holding other programs or data, for example, in one or more scripts in a Hyper Text Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple collaborative files (e.g., files that store one or more modules, subroutines, or portions of code). Executable instructions may be deployed to execute on one electronic device, on multiple electronic devices located at one site, or on multiple electronic devices distributed across multiple sites and interconnected by a communications network.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Computing Systems (AREA)
  • Geometry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Studio Circuits (AREA)

Abstract

The embodiments of the present application are applied at least in the fields of animation production and special effects video processing. The application relates to a video processing method and apparatus, an electronic device, a computer-readable storage medium, and a computer program product. The method comprises: for each original video frame of an original special effects video, generating a color picture and a transparency picture based on the color channel information and transparency channel information of each pixel in the original video frame; splicing the color picture and the transparency picture of each original video frame according to preset configuration information to obtain at least one spliced picture; performing video conversion processing on the at least one spliced picture to obtain a transparency spliced video; and performing special effects video rendering based on the preset configuration information and the transparency spliced video to obtain a transparency special effects video and display it.
PCT/CN2023/110290 2022-09-07 2023-07-31 Procédé et appareil de traitement vidéo, dispositif électronique, support de stockage lisible par ordinateur, et produit de programme d'ordinateur WO2024051394A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211090054.2A CN117218269A (zh) 2022-09-07 2022-09-07 视频处理方法、装置、设备及存储介质
CN202211090054.2 2022-09-07

Publications (1)

Publication Number Publication Date
WO2024051394A1 true WO2024051394A1 (fr) 2024-03-14

Family

ID=89033998

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/110290 WO2024051394A1 (fr) 2022-09-07 2023-07-31 Procédé et appareil de traitement vidéo, dispositif électronique, support de stockage lisible par ordinateur, et produit de programme d'ordinateur

Country Status (2)

Country Link
CN (1) CN117218269A (fr)
WO (1) WO2024051394A1 (fr)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110102456A1 (en) * 2009-10-30 2011-05-05 Synopsys, Inc. Drawing an image with transparent regions on top of another image without using an alpha channel
CN107770618A (zh) * 2017-11-02 2018-03-06 腾讯科技(深圳)有限公司 一种图像处理方法、装置及存储介质
CN112070863A (zh) * 2019-06-11 2020-12-11 腾讯科技(深圳)有限公司 动画文件处理方法、装置、计算机可读存储介质和计算机设备
CN113645469A (zh) * 2020-05-11 2021-11-12 腾讯科技(深圳)有限公司 图像处理方法、装置、智能终端及计算机可读存储介质

Also Published As

Publication number Publication date
CN117218269A (zh) 2023-12-12

Similar Documents

Publication Publication Date Title
CN111193876B (zh) 视频中添加特效的方法及装置
CN111899322B (zh) 视频处理方法、动画渲染sdk和设备及计算机存储介质
CN106611435B (zh) 动画处理方法和装置
CN111899155B (zh) 视频处理方法、装置、计算机设备及存储介质
CN108010112B (zh) 动画处理方法、装置及存储介质
US11507727B2 (en) Font rendering method and apparatus, and computer-readable storage medium
CN113946402B (zh) 基于渲染分离的云手机加速方法、系统、设备及存储介质
TW202004674A (zh) 在3d模型上展示豐富文字的方法、裝置及設備
CN107767437B (zh) 一种多层混合异步渲染方法
US20180143741A1 (en) Intelligent graphical feature generation for user content
CN112307403A (zh) 页面渲染方法、装置、存储介质以及终端
US10460490B2 (en) Method, terminal, and computer storage medium for processing pictures in batches according to preset rules
CN114222185B (zh) 视频播放方法、终端设备及存储介质
CN112954452B (zh) 视频生成方法、装置、终端及存储介质
WO2024051394A1 (fr) Procédé et appareil de traitement vidéo, dispositif électronique, support de stockage lisible par ordinateur, et produit de programme d'ordinateur
CN111526420A (zh) 一种视频渲染方法、电子设备及存储介质
CN115391692A (zh) 视频处理方法和装置
CN115250335A (zh) 视频处理方法、装置、设备及存储介质
CN115131470A (zh) 一种图文素材合成方法、装置、电子设备和存储介质
CN113938572A (zh) 图片传输方法、显示方法、装置、电子设备及存储介质
WO2024131222A1 (fr) Procédé et appareil de traitement d'informations, dispositif électronique, support de stockage lisible par ordinateur, et produit-programme d'ordinateur
US11599338B2 (en) Model loading method and apparatus for head-mounted display device, and head-mounted display device
CN116527983A (zh) 页面显示方法、装置、设备、存储介质及产品
CN116824007A (zh) 动画播放方法、动画生成方法、装置及电子设备
CN118264857A (zh) 一种播放带透明通道视频的方法、装置、电子设备、可读存储介质及计算机程序产品

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23862094

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2023862094

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 2023862094

Country of ref document: EP

Effective date: 20240809