CN117830445A - Method, apparatus, device and storage medium for special effect generation - Google Patents


Info

Publication number
CN117830445A
Authority
CN
China
Prior art keywords
special effect, image, sequence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410153962.4A
Other languages
Chinese (zh)
Inventor
Chen Qichao (陈启超)
Zheng Yexin (郑叶欣)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd filed Critical Beijing Zitiao Network Technology Co Ltd
Priority to CN202410153962.4A priority Critical patent/CN117830445A/en
Publication of CN117830445A publication Critical patent/CN117830445A/en
Pending legal-status Critical Current


Landscapes

  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

According to embodiments of the present disclosure, methods, apparatuses, devices, and storage media for special effect generation are provided. In the method, a special effect image sequence is acquired, the sequence comprising one or more special effect images for applying a special effect style; redundancy detection is performed on the special effect image sequence to detect at least one special effect image and/or at least one image region in which redundancy exists in the sequence; image information of the special effect image sequence is encoded based on the redundancy of the at least one special effect image and/or the at least one image region, to obtain an encoding result corresponding to the sequence; and a special effect package corresponding to the special effect style is generated based at least on the encoding result. In this way, memory consumption and computation can be reduced, the efficiency of generating the special effect package corresponding to the special effect style is improved, and cost is reduced.

Description

Method, apparatus, device and storage medium for special effect generation
Technical Field
Example embodiments of the present disclosure relate generally to the field of computers and, more particularly, to methods, apparatuses, devices, and computer-readable storage media for special effect generation.
Background
An increasing number of applications are designed to provide various services to users. For example, a user may publish, browse, and view various types of content in an application, including multimedia content such as videos, images, image sets, and sounds. The user may also apply special effect styles to images in the application. In some applications, the generation of special effect packages corresponding to certain special effect styles is performed at a server device, which is limited by power consumption and computing power, so the user experience is affected. It is therefore desirable to improve the user experience.
Disclosure of Invention
In a first aspect of the present disclosure, a method of special effect generation is provided. The method comprises: acquiring a special effect image sequence, the sequence comprising one or more special effect images for applying a special effect style; performing redundancy detection on the special effect image sequence to detect at least one special effect image and/or at least one image region in which redundancy exists in the sequence; encoding image information of the special effect image sequence based on the redundancy of the at least one special effect image and/or the at least one image region, to obtain an encoding result corresponding to the sequence; and generating a special effect package corresponding to the special effect style based at least on the encoding result.
In a second aspect of the present disclosure, an apparatus for special effect generation is provided. The apparatus comprises: an acquisition module configured to acquire a special effect image sequence comprising one or more special effect images for applying a special effect style; a detection module configured to perform redundancy detection on the special effect image sequence to detect at least one special effect image and/or at least one image region in which redundancy exists in the sequence; an encoding module configured to encode image information of the special effect image sequence based on the redundancy of the at least one special effect image and/or the at least one image region, to obtain an encoding result corresponding to the sequence; and a generation module configured to generate a special effect package corresponding to the special effect style based at least on the encoding result.
In a third aspect of the present disclosure, an electronic device is provided. The device comprises at least one processing unit; and at least one memory coupled to the at least one processing unit and storing instructions for execution by the at least one processing unit. The instructions, when executed by the at least one processing unit, cause the device to perform the method of the first aspect.
In a fourth aspect of the present disclosure, a computer-readable storage medium is provided. The computer readable storage medium has stored thereon a computer program executable by a processor to implement the method of the first aspect.
It should be understood that the content described in this section is not intended to identify key or essential features of the embodiments of the present disclosure, nor is it intended to limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. In the drawings, like or similar reference numerals denote like or similar elements, in which:
FIG. 1 illustrates a schematic diagram of an example environment in which embodiments of the present disclosure may be implemented;
FIG. 2 illustrates a schematic diagram of an example architecture of effect generation, according to some embodiments of the present disclosure;
FIGS. 3A-3C illustrate schematic diagrams of the presence of redundant special effect images in a special effect image sequence according to some embodiments of the present disclosure;
FIGS. 4A-4C illustrate schematic diagrams of the presence of redundant special effects images in a special effects image sequence according to further embodiments of the present disclosure;
FIGS. 5A-5D illustrate schematic diagrams of image regions in which redundancy exists in a special effect image sequence according to some embodiments of the present disclosure;
FIG. 6 illustrates a flow chart of a process of special effects interaction according to some embodiments of the present disclosure;
FIG. 7 illustrates a block diagram of an apparatus for special effects generation, according to some embodiments of the present disclosure; and
FIG. 8 illustrates a block diagram of an electronic device in which one or more embodiments of the present disclosure may be implemented.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure have been illustrated in the accompanying drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather, these embodiments are provided so that this disclosure will be more thorough and complete. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
In describing embodiments of the present disclosure, the term "comprising" and its variants should be understood as open-ended inclusion, i.e., "including, but not limited to". The term "based on" should be understood as "based at least in part on". The term "one embodiment" or "the embodiment" should be understood as "at least one embodiment". The term "some embodiments" should be understood as "at least some embodiments". Other explicit and implicit definitions may also be included below.
In this context, unless explicitly stated otherwise, performing a step "in response to a" does not mean that the step is performed immediately after "a", but may include one or more intermediate steps.
It will be appreciated that the data (including but not limited to the data itself, the acquisition or use of the data) involved in the present technical solution should comply with the corresponding legal regulations and the requirements of the relevant regulations.
As used herein, the term "model" refers to a construct that can learn associations between respective inputs and outputs from training data, so that after training is completed, a corresponding output can be generated for a given input. The generation of the model may be based on machine learning techniques. Deep learning is a machine learning algorithm that processes inputs and provides corresponding outputs through the use of multiple layers of processing units. A "model" may also be referred to herein as a "machine learning model," "machine learning network," or "network," and these terms are used interchangeably herein. A model may in turn comprise different types of processing units or networks.
It will be appreciated that prior to using the technical solutions disclosed in the embodiments of the present disclosure, the user should be informed and authorized of the type, usage range, usage scenario, etc. of the personal information related to the present disclosure in an appropriate manner according to relevant legal regulations.
For example, in response to receiving an active request from a user, prompt information is sent to the user to explicitly prompt the user that the requested operation will require obtaining and using the user's personal information, so that the user may autonomously choose, according to the prompt information, whether to provide personal information to the software or hardware, such as an electronic device, application, server, or storage medium, that performs the operation of the technical solution of the present disclosure.
As an optional but non-limiting implementation, in response to receiving an active request from the user, the prompt information may be sent to the user, for example, in a pop-up window, where the prompt information may be presented as text. In addition, the pop-up window may carry a selection control allowing the user to choose "consent" or "refuse" to provide personal information to the electronic device.
It will be appreciated that the above-described notification and user authorization process is merely illustrative and not limiting of the implementations of the present disclosure, and that other ways of satisfying relevant legal regulations may be applied to the implementations of the present disclosure.
FIG. 1 illustrates a schematic diagram of an example environment 100 in which embodiments of the present disclosure may be implemented.
In this example environment 100, an application 120 is installed in a terminal device 110. The user 140 may interact with the application 120 via the terminal device 110 and/or an attachment device of the terminal device 110. In some embodiments, terminal device 110 communicates with server device 130 to enable provisioning of services for application 120.
In the environment 100, the terminal device 105 communicates with the server device 130, and the application 125 is installed in the terminal device 105. The user 135 may interact with the application 125 via the terminal device 105 and/or an attachment device of the terminal device 105.
In some examples, the application 125 may be a special effects editing application capable of providing various types of services related to special effects content editing to the user 135. In some examples, the application 125 includes an editor. The server device 130 may utilize the application 125 to provide services for the application 120 installed in the terminal device 110. For example, the server device 130 may generate a special effect package corresponding to the special effect style by using the application 125, and provide services for the application 120 installed in the terminal device 110.
In the example environment 100, the user 135 may be referred to as a special effect design user or designer. In some embodiments, the application 125 may be deployed locally on the terminal device of the user 135 and/or may be supported by the server device 130. For example, the terminal device of the user 135 may run a client of the application 125 that supports user interaction with the application 125 provided by the server device 130. In the case where the application 125 runs on the server device 130, the server device 130 may provide services to clients running on the terminal device based on the communication connection with the terminal device. Based on operations of the user 135, the application 125 can present a corresponding page 145 to output information to the user 135 and/or receive information related to special effect editing from the user 135.
In some embodiments, the application 120 may be a content sharing application (e.g., a video-sharing-based video application) capable of providing various types of services related to media content items to the user 140, including browsing, commenting, forwarding, authoring (e.g., shooting and/or editing), publishing, and so forth of the content. In some embodiments, the application 120 may also be any other suitable application in which media content items can be presented.
In the environment 100 of FIG. 1, when the application 120 is in an active state, the terminal device 110 may present a page 150 of the application 120. The page 150 may include various types of pages that the application 120 can provide, such as content presentation pages, content authoring pages, content publishing pages, message pages, personal homepages, and so forth. The application 120 may provide content viewing functionality to view various types of content published in the application 120. Via the application 120, the user 140 may also use the generated special effect package to author various types of content. The user 140 may be referred to as a special effect user.
In some embodiments, the terminal device 105 or 110 may be any type of mobile terminal, fixed terminal, or portable terminal, including a mobile handset, desktop computer, laptop computer, notebook computer, netbook computer, tablet computer, media computer, multimedia tablet, Personal Communication System (PCS) device, personal navigation device, Personal Digital Assistant (PDA), audio/video player, digital camera/video camera, positioning device, television receiver, radio broadcast receiver, electronic book device, gaming device, or any combination of the preceding, including accessories and peripherals for these devices, or any combination thereof. In some embodiments, the terminal device 105 or 110 is also capable of supporting any type of user interface (such as "wearable" circuitry, etc.). The server device 130 may be any of various types of computing systems/servers capable of providing computing capability, including but not limited to mainframes, edge computing nodes, computing devices in a cloud environment, and so forth.
It should be understood that the structure and function of the various elements in environment 100 are described for illustrative purposes only and are not meant to suggest any limitation as to the scope of the disclosure.
As briefly mentioned above, during the production of special effect packages, image materials are often used to achieve visual effects such as animation and interaction. The processing can be divided into two steps: decoding and post-processing. Decoding recovers a compressed image file, such as png or jpg, according to a decompression algorithm into plain pixel textures that can be read directly by Graphics Processing Unit (GPU) hardware, and stores them in system memory/video memory. Post-processing samples the pixel textures on the GPU and computes the final pixel values according to the special effect settings for display on the screen. For example, each frame decompresses one picture and displays it at a designated position on the screen, and successive frames form an animation effect in that region.
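The two-step pipeline described above can be sketched as follows. This is an illustrative Python sketch only: run-length encoding stands in for a real png/jpg codec, and all function names and the brightness setting are assumptions, not the disclosed implementation.

```python
# Minimal sketch of the decode + post-process pipeline described above.
# Run-length encoding stands in for a real png/jpg codec; names are illustrative.

def decode_rle(compressed):
    """Recover a flat pixel buffer (the 'texture') from (count, value) runs."""
    pixels = []
    for count, value in compressed:
        pixels.extend([value] * count)
    return pixels

def post_process(texture, width, x, y, brightness=1.0):
    """Sample the decoded texture at (x, y) and compute the final pixel value."""
    return texture[y * width + x] * brightness

# One 4x2 'frame' compressed as two runs of pixels.
frame = [(4, 10), (4, 20)]
texture = decode_rle(frame)   # -> [10, 10, 10, 10, 20, 20, 20, 20]
value = post_process(texture, width=4, x=1, y=1, brightness=0.5)
print(value)                  # 10.0
```

In a real engine the decoded buffer lives in video memory and the sampling happens in a GPU shader; the sketch only mirrors the data flow.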
Current special effect packages are mostly applied on mobile devices, which are limited in power consumption and computing power. For example, excessive power consumption and intensive computation cause heating, device frequency throttling, frame dropping, and so on, which can severely impact the user experience. Correspondingly, in special effect image material processing, the decoding and post-processing of the materials involve intensive computation, so memory reads and transfers have a notable influence on heating; moreover, stability decreases when memory usage is too large, and problems such as crashes occur easily on some low-end devices.
Reducing the size of the image material reduces memory consumption and computation. Conventionally, low-resolution material may be used, or the size may be reduced by trimming the edge margin areas of the image. For example, Adaptive Scalable Texture Compression (ASTC) may be used to reduce the memory footprint. However, these are single-point techniques that require substantial human judgment. Although texture compression can reduce memory in certain individual cases, it cannot handle all data. Moreover, the vast inventory of existing special effect prop image materials in the current business, as well as future incremental ones, lack a unified automated processing flow.
According to an embodiment of the present disclosure, an improved special effect generation scheme is presented. According to the scheme, a special effect image sequence is obtained that includes one or more special effect images for applying a special effect style. Redundancy detection is performed on the special effect image sequence to detect at least one special effect image and/or at least one image region in which redundancy exists in the sequence. Image information of the special effect image sequence is encoded based on the redundancy of the at least one special effect image and/or the at least one image region, to obtain an encoding result corresponding to the sequence. Then, a special effect package corresponding to the special effect style is generated based at least on the encoding result. In this way, memory consumption and computation can be reduced, the efficiency of generating the special effect package corresponding to the special effect style is improved, and cost is reduced.
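The overall scheme above (detect redundancy, encode, package) can be sketched as follows. This is an illustrative Python sketch under the assumption that frames are hashable tuples of pixel values; the function and field names are invented for illustration and are not the patent's implementation.

```python
# Hedged sketch of the overall scheme: detect redundant (duplicate) frames in a
# special effect image sequence, encode only the unique frames, and bundle the
# result into a "special effect package". Frames are tuples of pixel values.

def build_effect_package(frames):
    unique = []      # frames actually stored (the encoding result)
    index_map = []   # per-frame index into `unique`, so playback is unchanged
    seen = {}
    for frame in frames:
        if frame not in seen:          # redundancy detection: exact duplicates
            seen[frame] = len(unique)
            unique.append(frame)
        index_map.append(seen[frame])
    return {"frames": unique, "index_map": index_map}

seq = [(0, 0, 0), (1, 2, 3), (1, 2, 3), (0, 0, 0)]
package = build_effect_package(seq)
print(package["index_map"])   # [0, 1, 1, 0] -- only 2 of 4 frames stored
```

The index map plays the role of the reconfigured rendering parameters: the rendered sequence is identical, but redundant frames are stored once.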
Some example embodiments of the present disclosure will be described below with continued reference to the accompanying drawings.
Fig. 2 illustrates a schematic diagram of an example architecture 200 of special effects generation, according to some embodiments of the present disclosure. For ease of discussion, these embodiments will be described with reference to environment 100 of FIG. 1.
As shown in FIG. 2, a user 135 (e.g., a special effect designer) designs a special effect image sequence using the editor 212 in the terminal device 105 and encodes the sequence into a special effect package 213. The special effect package 213 may be provided to the server device 130 or the terminal device 110. The server device 130 or the terminal device 110 decodes and renders the special effect image sequence from the special effect package 213 based on the rendering engine and algorithm engine layer 215. The hardware layer 216 supports the editing and encoding of the sequence as well as its decoding and rendering. In some examples, the editor 212 and part of the hardware layer 216 may reside on the special effect design side, such as in the application 125 of the terminal device 105. The rendering engine and algorithm engine layer 215 and part of the hardware layer 216 may reside on the special effect application side, for example in the terminal device 110.
The material optimization module 214 is configured to perform the optimization operations for effect generation set forth in embodiments of the present disclosure. In some embodiments, the material optimization module 214 may be implemented in a variety of suitable devices.
In some embodiments, the material optimization module 214 may be implemented in the application 125; for example, it may be implemented at the terminal device 105 along with the editor 212, so that the designed special effect can be optimized and packaged during the design process. Although shown as two separate modules in FIG. 2, in some embodiments the material optimization module 214 may be integrated in the editor 212 as part of it.
In some embodiments, the material optimization module 214 may be integrated at the server device 130. In this way, the special effects being designed can be optimized and packaged in the special effects design process of the terminal device 105, and the generated special effects package can also be optimized and repackaged.
In some embodiments, the material optimization module 214 may be implemented in the terminal device 110. In this way, before the special effect package is applied, it may be optimized and repackaged, and special effect rendering may then be performed using the repackaged special effect package.
Hereinafter, for convenience of description, the case where the material optimization module is integrated in the server device 130 is described as an example, and the material optimization process is therefore described from the perspective of the server device 130.
For ease of understanding, the overall process by which the material optimization module 214 performs the optimization operations for special effect generation set forth in embodiments of the present disclosure is first described with reference to FIGS. 1 and 2.
When the material optimization module 214 is triggered or optimization runs automatically, the module first performs redundancy detection. If the server device 130, via the material optimization module 214, finds at least one special effect image and/or at least one image region with redundancy in the special effect image sequence (e.g., redundant image material), a redundancy optimization process is performed by the redundancy optimization module 218. For example, the redundancy optimization module 218 includes redundant-region clipping 218-1, redundant-region replacement 218-2, and the like. After the server device 130 performs redundancy optimization on the special effect image sequence via the material optimization module 214, the size and organization form of the sequence will change. Accordingly, in order not to change the prop effect designed by the user, the server device 130 reconfigures the corresponding rendering parameters via the rendering parameter reconfiguration module 219. The server device 130 then packages the reconfigured rendering parameters via the repackaging module 220 and outputs the modified special effect package.
The scheme for special effect generation according to the embodiment of the present disclosure is described in detail below with continued reference to fig. 1 and 2.
In performing material optimization, the server device 130 obtains a special effect image sequence including one or more special effect images for applying a special effect style. Each special effect image is also referred to as a special effect frame. In some examples, the server device 130 obtains the special effect image sequence from the terminal device 110.
In some embodiments, the server device 130 performs redundancy detection on the special effect image sequence to detect at least one special effect image and/or at least one image region in which redundancy exists in the sequence. In some examples, the server device 130 performs redundancy detection through the material optimization module 214. It will be appreciated that the material optimization module 214 is applicable both to static special effects (comprising a single special effect image) and to dynamic special effects (e.g., special effect animations).
In some examples, the server device 130 may perform redundancy detection on the special effect image sequence through the material optimization module 214 to detect at least one special effect image and/or at least one image region in which redundancy exists in the sequence. Embodiments of the present disclosure are described below with reference to FIGS. 3A-3C, FIGS. 4A-4C, and FIGS. 5A-5D, which illustrate schematic diagrams of redundant special effect images and image regions detected by the present disclosure; these are merely exemplary, however, and the present disclosure is not limited thereto.
FIGS. 3A-3C illustrate schematic diagrams of the presence of redundant special effect images in a special effect image sequence according to some embodiments of the present disclosure. As shown in FIGS. 3A-3C, the server device 130 performs redundancy detection on the special effect image sequence through the material optimization module 214 to detect at least one redundant special effect image in the sequence. For example, the diagram 301 shown in FIG. 3A, the diagram 302 shown in FIG. 3B, and the diagram 303 shown in FIG. 3C are redundant special effect images present in the sequence, for example, all-black images present in the special effect image sequence.
FIGS. 4A-4C illustrate schematic diagrams of the presence of redundant special effect images in a special effect image sequence according to further embodiments of the present disclosure. As shown in FIGS. 4A-4C, the server device 130 performs redundancy detection on the special effect image sequence through the material optimization module 214 to detect at least one redundant special effect image in the sequence. For example, the diagram 401 shown in FIG. 4A, the diagram 402 shown in FIG. 4B, and the diagram 403 shown in FIG. 4C are redundant special effect images present in the sequence, for example, duplicate images present in the special effect image sequence.
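Duplicate-frame redundancy of the kind shown in FIGS. 4A-4C can be detected, for example, by hashing each frame's raw bytes and mapping repeats to their first occurrence. The sketch below is illustrative only; the frame representation (byte strings) and the choice of SHA-256 are assumptions, not the disclosed method.

```python
# Illustrative duplicate-frame detection: frames with identical content are
# found by hashing their raw bytes, so only one copy needs to be encoded.
import hashlib

def find_duplicates(frames):
    first_seen = {}
    duplicates = {}   # frame index -> index of its first occurrence
    for i, frame in enumerate(frames):
        digest = hashlib.sha256(frame).hexdigest()
        if digest in first_seen:
            duplicates[i] = first_seen[digest]
        else:
            first_seen[digest] = i
    return duplicates

frames = [b"\x00" * 16, b"\x01\x02" * 8, b"\x00" * 16]
print(find_duplicates(frames))   # {2: 0}: frame 2 repeats frame 0
```

A production implementation would compare pixel content (or use a perceptual hash for near-duplicates) rather than relying on byte-identical encodings.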
FIGS. 5A-5D illustrate schematic diagrams of image regions in which redundancy exists in a special effect image sequence according to some embodiments of the present disclosure. As shown in FIGS. 5A-5D, the server device 130 performs redundancy detection on the special effect image sequence through the material optimization module 214 to detect at least one redundant image region in the sequence. For example, the diagram 501 shown in FIG. 5A, the diagram 502 shown in FIG. 5B, the diagram 503 shown in FIG. 5C, and the diagram 504 shown in FIG. 5D contain redundant image regions; for example, the effective picture exists only in the lower half of each special effect image. In diagram 501, the effective picture is in interface 510; in diagram 502, in interface 520; in diagram 503, in interface 530; and in diagram 504, in interface 540.
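Region redundancy of the kind shown in FIGS. 5A-5D (an effective picture occupying only part of the frame) can be located, for example, by computing the tight bounding box of non-background pixels; everything outside it is redundant. The sketch below is illustrative: frames as lists of rows and 0 as the background value are assumptions.

```python
# Sketch of detecting a redundant image region: find the tight bounding box of
# the effective (non-background) picture. Pixels equal to `background` count
# as redundant.

def active_bounding_box(frame, background=0):
    coords = [(x, y) for y, row in enumerate(frame)
                     for x, v in enumerate(row) if v != background]
    if not coords:
        return None                        # whole frame is redundant
    xs = [x for x, _ in coords]
    ys = [y for _, y in coords]
    return (min(xs), min(ys), max(xs), max(ys))   # left, top, right, bottom

frame = [
    [0, 0, 0, 0],
    [0, 0, 0, 0],
    [0, 7, 7, 0],   # effective picture only in the lower half
    [0, 7, 7, 0],
]
print(active_bounding_box(frame))   # (1, 2, 2, 3)
```

Cropping to this box is the "redundant area clipping" role of module 218-1; the crop offset then feeds the rendering parameter reconfiguration.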
In some embodiments, the server device 130 encodes the image information of the special effect image sequence based on the redundancy of the at least one special effect image and/or the at least one image region, and thereby determines an encoding result corresponding to the sequence. The server device 130 generates a special effect package corresponding to the special effect style according to the encoding result.
In some embodiments, the server device 130 configures rendering parameters for each effect image in the sequence of effect images based at least on the encoding result of the sequence of effect images. The server device 130 generates a special effect package corresponding to the special effect style according to the encoding result and the rendering parameters.
In some examples, the size and organization form of the special effect image sequence may change after the server device 130 performs redundancy optimization on it via the material optimization module 214. Therefore, in the special effect package corresponding to the special effect style, parameters are reconfigured where they are referenced. For example, if an algorithm in the prop package references the pixel in the upper-left corner of frame 1 and that position no longer exists after optimization, the rendering parameters are modified so that rendering samples from the corresponding (e.g., middle) position, and a default value (0, 0) is returned for parameter references into the removed upper-half region. The rendering parameters (reference positions) are included in the configuration file of the special effect package.
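The rendering parameter reconfiguration described above can be sketched as follows. This is an illustrative Python sketch under the assumption that a frame was cropped by a known offset; function names and the (0, 0) default follow the example in the text but are otherwise invented.

```python
# Hedged sketch of rendering-parameter reconfiguration: after the redundant
# upper region of a frame is cropped away, references into the original frame
# are remapped into the cropped texture; references to removed pixels fall
# back to a default value, as described in the text above.

DEFAULT_PIXEL = (0, 0)

def remap_sample(x, y, crop_left, crop_top, crop_w, crop_h, cropped):
    """Translate an original-frame coordinate into the cropped texture."""
    cx, cy = x - crop_left, y - crop_top
    if 0 <= cx < crop_w and 0 <= cy < crop_h:
        return cropped[cy][cx]
    return DEFAULT_PIXEL          # reference into the removed region

# Original frame was 4x4; only the lower-half rows 2..3 survived the crop.
cropped = [[(7, 7), (8, 8)], [(9, 9), (1, 1)]]
print(remap_sample(0, 2, 0, 2, 2, 2, cropped))  # (7, 7): remapped sample
print(remap_sample(0, 0, 0, 2, 2, 2, cropped))  # (0, 0): removed upper half
```

In the actual pipeline this remapping would be expressed as updated reference positions in the package's configuration file rather than executed per sample.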
For example, the material optimization module 214 may be integrated in the special effect editor 212. In this way, the generated special effect package (also referred to as a prop package) can be optimized in real time while the user 135 (special effect designer) designs the effect. In other examples, the material optimization module 214 may be arranged between the editor 212 and the rendering/algorithm engine layer 215: after the user 135 (special effect designer) designs the special effect, it is packaged based on the existing scheme into a corresponding special effect package; the material optimization module 214 then performs optimization on the existing package and provides the optimized package to the rendering/algorithm engine layer 215 for rendering and output. For special effect packages that have already been generated, the material optimization module 214 may also perform batch detection and automatic optimization according to the above optimization steps.
The following describes, with reference to fig. 2, how the server device 130 performs redundancy detection on the special effect image sequence. In some embodiments, the server device 130 detects whether there are image areas having the same color in each special effect image by performing single color detection on each special effect image in the special effect image sequence. In some examples, the server device 130 performs the single color detection on each special effect image in the sequence based on the single color detection module 217-1 of the redundancy detection module 217 in the material optimization module 214.
In some embodiments, the server device 130 may perform the single color detection on each special effect image in the sequence in the following manner. The server device 130 slides a first sliding window of a predetermined area shape across each special effect image in the special effect image sequence, and determines whether the image block defined by the first sliding window has a single color.
In one example, the server device 130 detects the sequence of effect images using a first sliding window of a block area, for example 64 x 64, and determines whether all pixels within the image block defined by the window have the same color. By sliding the first sliding window sequentially across the special effect image, image blocks of the same color detected in consecutive sliding windows are determined as a first image area in which single color redundancy exists.
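A minimal sketch of this windowed monochrome check, assuming effect images are NumPy RGBA arrays and that the 64 x 64 window advances in non-overlapping steps (the patent does not fix the stride, so that is an assumption):

```python
import numpy as np

def detect_monochrome_regions(image: np.ndarray, window: int = 64):
    """Scan an image with a square sliding window and return the
    window-aligned blocks whose pixels all share one color.

    Returns a list of (left, top, right, bottom, pixel_value) tuples.
    """
    h, w = image.shape[:2]
    regions = []
    for top in range(0, h, window):
        for left in range(0, w, window):
            block = image[top:top + window, left:left + window]
            first = block[0, 0]
            if (block == first).all():  # every pixel equals the first one
                regions.append((left, top,
                                min(left + window, w),
                                min(top + window, h),
                                tuple(int(v) for v in first)))
    return regions
```

Adjacent monochrome blocks of the same color could then be merged into a larger first image area before encoding.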
In some embodiments, if a first image region having the same color is detected from a first special effect image in the special effect image sequence, the server device 130 performs encoding of the first image region based on a single pixel value corresponding to the color of the first image region. In some embodiments, the encoding result indicates the range of the first image region in the first special effect image, and further indicates that the encoding result of the single pixel value is applied to the first image region.
In some examples, if the same color is within a certain image region, then all pixels of that region may be represented by one pixel value c (r, g, b, a). If the whole special effect image is a single color value, the special effect image can be represented by only one pixel value.
In some examples, if the first image region contains an image of the same color, the server device 130 encodes a single pixel value for the first image region and records the range (L, T, R, B) of that single pixel value in the original special effect image. For example, if the first image region has the same color, the server device 130 encodes a single pixel value, and the other pixels of the same color in the image then directly reference the encoded single pixel value.
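The single-pixel-plus-range encoding might look like the following sketch; the `(L, T, R, B)` field names follow the text above, while the dictionary layout is an assumption:

```python
import numpy as np

def encode_monochrome_region(bounds, pixel_value):
    """Encode a same-color region as one pixel value plus its (L, T, R, B)
    extent in the original special effect image."""
    l, t, r, b = bounds
    return {"range": {"L": l, "T": t, "R": r, "B": b}, "pixel": pixel_value}

def decode_monochrome_region(encoded):
    """Expand the single-pixel encoding back into a full block of pixels."""
    rng = encoded["range"]
    h, w = rng["B"] - rng["T"], rng["R"] - rng["L"]
    block = np.empty((h, w, len(encoded["pixel"])), dtype=np.uint8)
    block[:] = encoded["pixel"]  # broadcast the single pixel to every position
    return block
```

A whole-image monochrome frame is just the special case where the range covers the full image, so one pixel value stands in for the entire frame.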
In some examples, the server device 130 performs inter-frame repetition detection on each of the sequence of special effects images based on the inter-frame repetition detection module 217-2 in the redundancy detection module 217 in the material optimization module 214.
In some examples, the server device 130 may perform inter-frame repetition detection on each special effect image in the sequence in the following manner. In some embodiments, the server device 130 determines whether respective pixel positions of at least two consecutive special effect images in the special effect image sequence have the same pixel value. The server device 130 then determines that redundancy exists in the at least two consecutive special effect images based on determining that their respective pixel positions have the same pixel value.
In some embodiments, the server device 130 slides a second sliding window through the sequence of special effect images and determines whether the at least two consecutive special effect images defined in the second sliding window are identical. For example, for a scene such as a sequential frame animation, the server device 130 uses the second sliding window to compare pixel values of the previous and subsequent frames, so as to determine whether the at least two consecutive special effect images defined in the second sliding window are identical. By sliding the second sliding window sequentially, identical special effect images detected in consecutive sliding windows are determined as a group of special effect images in which inter-frame redundancy exists.
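A pixel-exact comparison of consecutive frames, as described above, could be sketched like this (assuming frames are NumPy arrays; extending adjacent matches into runs is an assumption about how consecutive window hits are grouped):

```python
import numpy as np

def detect_repeated_frames(frames):
    """Compare each pair of consecutive frames pixel by pixel (a sliding
    window of two frames) and merge identical neighbours into runs.

    Returns a list of (start, end) index pairs, end exclusive."""
    runs = []
    i = 0
    while i < len(frames) - 1:
        j = i
        # Extend the run while the next frame is pixel-identical.
        while j + 1 < len(frames) and np.array_equal(frames[j], frames[j + 1]):
            j += 1
        if j > i:
            runs.append((i, j + 1))
        i = j + 1
    return runs
```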
In some embodiments, if the server device 130 determines that there is redundancy for at least two consecutive special effect images in the sequence of special effect images, encoding is performed on a single special effect image in the at least two consecutive special effect images. The encoding results may indicate the position of at least two consecutive special effect images in the special effect image sequence and may also indicate that the encoding results of a single special effect image are applied to decode at least two consecutive special effect images.
In some examples, if the server device 130 determines that redundancy exists in at least two consecutive special effect images, only a single special effect image of the at least two consecutive special effect images is retained, and the frame information of that image is recorded. For example, for the at least two consecutive identical special effect images defined by the second sliding window, the server device 130 lets the other special effect images directly reference the retained encoded special effect image.
For example, if redundancy exists in at least two consecutive special effect images in the special effect image sequence, only a single special effect image of the at least two consecutive special effect images is retained, and during decoding only the frame information corresponding to that single special effect image is decoded.
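Keeping one representative frame per identical run, plus the frame information needed to decode the full sequence, might be sketched as follows (the index-map representation of the frame information is an assumption):

```python
import numpy as np

def encode_frame_runs(frames):
    """Keep one representative frame per run of consecutive identical frames
    and record, for every original position, which kept frame it references."""
    kept, index_map = [], []
    for f in frames:
        if kept and np.array_equal(kept[-1], f):
            index_map.append(len(kept) - 1)  # reuse the previous kept frame
        else:
            kept.append(f)
            index_map.append(len(kept) - 1)
    return kept, index_map

def decode_frame_runs(kept, index_map):
    """Rebuild the full sequence: each stored frame is decoded once and
    re-referenced for every repeated position."""
    return [kept[i] for i in index_map]
```

Note that only consecutive duplicates are merged, matching the inter-frame redundancy of sequential frame animations described above.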
According to embodiments of the present disclosure, memory consumption and the amount of computation can be reduced, thereby improving the efficiency of generating the special effect package corresponding to the special effect style and reducing cost.
Example procedure
Fig. 6 illustrates a flow chart of a process 600 of special effect generation according to some embodiments of the present disclosure. Process 600 may be implemented at the server device 130. Process 600 is described below with reference to fig. 1.
As shown in fig. 6, at block 610, the server device 130 obtains a sequence of effect images including one or more effect images for applying an effect style.
At block 620, the server device 130 performs redundancy detection on the sequence of special effect images to detect at least one special effect image and/or at least one image region in which redundancy exists in the special effect image sequence.
At block 630, the server device 130 performs encoding on the image information of the special effect image sequence based on the redundancy of the at least one special effect image and/or the at least one image area, to obtain an encoding result corresponding to the special effect image sequence.
At block 640, the server device 130 generates a special effects package corresponding to the special effects style based at least on the encoding result.
In some embodiments, performing redundancy detection on the sequence of special effects images includes: a single color detection is performed on each of the special effect images in the special effect image sequence to detect whether or not there is an image area having the same color in each of the special effect images.
In some embodiments, performing the single color detection on each of the sequence of effect images includes: a first sliding window in the shape of a predetermined area slides in each of the special effect images in the special effect image sequence, and it is determined whether or not the image blocks defined by the first sliding window have the same color.
In some embodiments, performing encoding on the image information of the sequence of effect images includes: if a first image region having the same color is detected from a first special effect image in the special effect image sequence, encoding the first image region is performed based on a single pixel value corresponding to the color of the first image region, wherein the encoding result indicates a range of the first image region in the first special effect image, and further indicates that the encoding result of the single pixel value is applied to the first image region.
In some embodiments, performing redundancy detection on the sequence of special effects images includes: determining whether respective pixel positions of at least two consecutive special effect images in the special effect image sequence have the same pixel value; and determining that redundancy exists for the at least two consecutive effect images based on determining that respective pixel locations of the at least two consecutive effect images have the same pixel value.
In some embodiments, determining whether at least two consecutive special effects images in the sequence of special effects images are identical comprises: sliding in the sequence of effect images in accordance with the second sliding window to determine whether at least two consecutive effect images defined in the second sliding window are identical.
In some embodiments, performing encoding on the image information of the sequence of effect images includes: if it is determined that redundancy exists for at least two consecutive special effect images, encoding is performed on a single special effect image of the at least two consecutive special effect images, wherein the encoding result indicates a position of the at least two consecutive special effect images in the special effect image sequence and further indicates that the encoding result of the single special effect image is applied for decoding the at least two consecutive special effect images.
In some embodiments, generating the special effects package corresponding to the special effects style based at least on the encoding result includes: configuring rendering parameters for each effect image in the sequence of effect images based at least on the encoding result; and generating a special effect package including at least the encoding result and the rendering parameters.
Example apparatus and apparatus
Fig. 7 illustrates a schematic block diagram of an apparatus 700 for special effects generation, in accordance with certain embodiments of the present disclosure. The apparatus 700 may be implemented as or included in the server device 130. The various modules/components in apparatus 700 may be implemented in hardware, software, firmware, or any combination thereof.
As shown, the apparatus 700 includes an acquisition module 710 configured to acquire a sequence of effect images including one or more effect images for applying an effect style.
The apparatus 700 further comprises a detection module 720 configured to perform redundancy detection on the sequence of effect images to detect at least one effect image and/or at least one image region in which redundancy exists in the sequence of effect images.
The apparatus 700 further comprises an encoding module 730 configured to perform encoding of the image information of the special effect image sequence based on the redundancy of the at least one special effect image and/or the at least one image area, resulting in an encoding result corresponding to the special effect image sequence.
The apparatus 700 further includes a generation module 740 configured to generate a special effects package corresponding to the special effects style based at least on the encoding result.
In some embodiments, the detection module 720 is further configured to perform a single color detection on each of the special effects images in the sequence of special effects images to detect whether there are image regions of the same color in each of the special effects images.
In some embodiments, the detection module 720 is further configured to slide in each of the special effects images in the special effects image sequence in a first sliding window of a predetermined area shape, determining whether the image blocks defined by the first sliding window have the same color.
In some embodiments, the encoding module 730 is further configured to perform encoding of the first image region based on a single pixel value corresponding to the color of the first image region if the first image region having the same color is detected from the first special effects image in the special effects image sequence, wherein the encoding result indicates a range of the first image region in the first special effects image, and further indicates that the encoding result of the single pixel value is applied to the first image region.
In some embodiments, the detection module 720 is further configured to determine whether respective pixel positions of at least two consecutive special effect images in the special effect image sequence have the same pixel value; and determining that redundancy exists for the at least two consecutive effect images based on determining that respective pixel locations of the at least two consecutive effect images have the same pixel value.
In some embodiments, the detection module 720 further includes a determination module configured to slide through the sequence of special effects images in a second sliding window to determine whether at least two consecutive special effects images defined in the second sliding window are identical.
In some embodiments, the encoding module 730 is further configured to perform encoding on a single effect image of the at least two consecutive effect images if it is determined that there is redundancy for the at least two consecutive effect images, wherein the encoding result indicates a position of the at least two consecutive effect images in the sequence of effect images, and further indicates that the encoding result of the single effect image is applied to decode the at least two consecutive effect images.
In some embodiments, the generating module 740 is further configured to configure rendering parameters for each effect image in the sequence of effect images based at least on the encoding result; and generating a special effect package including at least the encoding result and the rendering parameters.
Fig. 8 illustrates a block diagram of an electronic device 800 in which one or more embodiments of the disclosure may be implemented. It should be understood that the electronic device 800 illustrated in fig. 8 is merely exemplary and should not be construed as limiting the functionality and scope of the embodiments described herein. The electronic device 800 illustrated in fig. 8 may be used to implement the server device 130 of fig. 1 or the apparatus 700 of fig. 7.
As shown in fig. 8, the electronic device 800 is in the form of a general-purpose electronic device. Components of electronic device 800 may include, but are not limited to, one or more processors or processing units 810, memory 820, storage device 830, one or more communication units 840, one or more input devices 850, and one or more output devices 860. The processing unit 810 may be a real or virtual processor and is capable of performing various processes according to programs stored in the memory 820. In a multiprocessor system, multiple processing units execute computer-executable instructions in parallel to increase the parallel processing capabilities of electronic device 800.
Electronic device 800 typically includes multiple computer storage media. Such a medium may be any available media that is accessible by electronic device 800, including, but not limited to, volatile and non-volatile media, removable and non-removable media. The memory 820 may be volatile memory (e.g., registers, cache, random Access Memory (RAM)), non-volatile memory (e.g., read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory), or some combination thereof. Storage device 830 may be a removable or non-removable medium and may include a machine-readable medium such as a flash drive, a magnetic disk, or any other medium that may be capable of storing information and/or data (e.g., training data for training) and that may be accessed within electronic device 800.
The electronic device 800 may further include additional removable/non-removable, volatile/nonvolatile storage media. Although not shown in fig. 8, a magnetic disk drive for reading from or writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk may be provided. In these cases, each drive may be connected to a bus (not shown) by one or more data medium interfaces. Memory 820 may include a computer program product 825 having one or more program modules configured to perform the various methods or acts of the various embodiments of the present disclosure.
The communication unit 840 enables communication with other electronic devices through a communication medium. Additionally, the functionality of the components of the electronic device 800 may be implemented in a single computing cluster or in multiple computing machines capable of communicating over a communications connection. Thus, the electronic device 800 may operate in a networked environment using logical connections to one or more other servers, a network Personal Computer (PC), or another network node.
The input device 850 may be one or more input devices such as a mouse, keyboard, trackball, etc. The output device 860 may be one or more output devices such as a display, speakers, printer, etc. The electronic device 800 may also communicate with one or more external devices (not shown), such as storage devices, display devices, etc., with one or more devices that enable a user to interact with the electronic device 800, or with any device (e.g., network card, modem, etc.) that enables the electronic device 800 to communicate with one or more other electronic devices, as desired, via the communication unit 840. Such communication may be performed via an input/output (I/O) interface (not shown).
According to an exemplary implementation of the present disclosure, there is provided a computer-readable storage medium having stored thereon computer-executable instructions that are executed by a processor to implement the method described above. According to an exemplary implementation of the present disclosure, there is also provided a computer program product tangibly stored on a non-transitory computer-readable medium and comprising computer-executable instructions that are executed by a processor to implement the method described above.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus, devices, and computer program products implemented according to the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer readable program instructions may be provided to a processing unit of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processing unit of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable medium having the instructions stored therein includes an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various implementations of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The foregoing description of implementations of the present disclosure has been provided for illustrative purposes, is not exhaustive, and is not limited to the implementations disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the various implementations described. The terminology used herein was chosen in order to best explain the principles of each implementation, the practical application, or the improvement of technology in the marketplace, or to enable others of ordinary skill in the art to understand each implementation disclosed herein.

Claims (11)

1. A method for special effect generation, comprising:
obtaining a special effect image sequence, wherein the special effect image sequence comprises one or more special effect images for applying special effect styles;
performing redundancy detection on the special effect image sequence to detect at least one special effect image and/or at least one image area in which redundancy exists in the special effect image sequence;
performing encoding on the image information of the special effect image sequence based on the redundancy of the at least one special effect image and/or the at least one image area to obtain an encoding result corresponding to the special effect image sequence; and
generating a special effect package corresponding to the special effect style based at least on the encoding result.
2. The method of claim 1, wherein performing redundancy detection on the sequence of special effects images comprises:
a single color detection is performed on each of the special effect images in the special effect image sequence to detect whether there is an image area having the same color in each of the special effect images.
3. The method of claim 2, wherein performing monochrome detection of each effect image in the sequence of effect images comprises:
a first sliding window in the shape of a predetermined area slides in each special effect image in the sequence of special effect images to determine whether the image blocks defined by the first sliding window have the same color.
4. The method of claim 2, wherein performing encoding on image information of the sequence of effect images comprises:
if a first image region having the same color is detected from a first special effect image in the special effect image sequence, encoding the first image region is performed based on a single pixel value corresponding to the color of the first image region,
wherein the encoding result indicates a range of the first image region in the first special effects image and further indicates that the encoding result of the single pixel value is applied to the first image region.
5. The method of claim 1, wherein performing redundancy detection on the sequence of special effects images comprises:
determining whether respective pixel positions of at least two consecutive special effect images in the special effect image sequence have the same pixel value; and
based on determining that respective pixel locations of at least two consecutive effect images have the same pixel value, it is determined that redundancy exists for the at least two consecutive effect images.
6. The method of claim 5, wherein determining whether at least two consecutive special effects images in the sequence of special effects images are identical comprises:
sliding in the sequence of special effect images according to a second sliding window to determine whether at least two consecutive special effect images defined in the second sliding window are identical.
7. The method of claim 5, wherein performing encoding on image information of the sequence of effect images comprises:
if it is determined that redundancy exists in the at least two consecutive special effect images, encoding is performed on a single special effect image of the at least two consecutive special effect images,
wherein the encoding results indicate the position of the at least two consecutive special effect images in the special effect image sequence and further indicate that the encoding results of the single special effect image are applied for decoding the at least two consecutive special effect images.
8. The method of claim 1, wherein generating a special effects package corresponding to the special effects style based at least on the encoding results comprises:
configuring rendering parameters for each special effect image in the special effect image sequence at least based on the encoding result; and
generating a special effect package comprising at least the encoding result and the rendering parameters.
9. An apparatus for special effects generation, comprising:
an acquisition module configured to acquire a sequence of special effects images, the sequence of special effects images comprising one or more special effects images for applying a special effects style;
a detection module configured to perform redundancy detection on the special effect image sequence to detect at least one special effect image and/or at least one image area in which redundancy exists in the special effect image sequence;
an encoding module configured to perform encoding on the image information of the special effect image sequence based on the redundancy of the at least one special effect image and/or the at least one image area, to obtain an encoding result corresponding to the special effect image sequence; and
a generation module configured to generate a special effect package corresponding to the special effect style based at least on the encoding result.
10. An electronic device, comprising:
at least one processing unit; and
at least one memory coupled to the at least one processing unit and storing instructions for execution by the at least one processing unit, which when executed by the at least one processing unit, cause the electronic device to perform the method of any one of claims 1-8.
11. A computer readable storage medium having stored thereon a computer program executable by a processor to implement the method of any of claims 1 to 8.
CN202410153962.4A 2024-02-02 2024-02-02 Method, apparatus, device and storage medium for special effect generation Pending CN117830445A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410153962.4A CN117830445A (en) 2024-02-02 2024-02-02 Method, apparatus, device and storage medium for special effect generation

Publications (1)

Publication Number Publication Date
CN117830445A true CN117830445A (en) 2024-04-05

Family

ID=90515603

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410153962.4A Pending CN117830445A (en) 2024-02-02 2024-02-02 Method, apparatus, device and storage medium for special effect generation

Country Status (1)

Country Link
CN (1) CN117830445A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination