WO2022213801A1 - Video processing method, apparatus and device - Google Patents
Video processing method, apparatus and device
- Publication number
- WO2022213801A1 (PCT/CN2022/082095)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- video
- timestamp
- target
- placeholder
- template
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/44012—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/001—Texturing; Colouring; Generation of texture or colour
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/45—Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
- H04N21/458—Scheduling content for creating a personalised stream, e.g. by combining a locally stored advertisement with an incoming stream; Updating operations, e.g. for OS modules ; time-related management operations
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2210/00—Indexing scheme for image generation or computer graphics
- G06T2210/32—Image data format
Definitions
- the embodiments of the present disclosure relate to the technical field of video processing, and in particular, to a video processing method, apparatus, device, storage medium, computer program product, and computer program.
- Embodiments of the present disclosure provide a video processing method, apparatus, device, storage medium, computer program product, and computer program. The method can render multiple types of materials, including text, pictures, and videos, based on placeholders, generating rendered videos that include multiple material types and improving the user experience.
- an embodiment of the present disclosure provides a video processing method, including: receiving a video generation request;
- obtaining a video template according to the video generation request, wherein the video template includes a plurality of placeholders, and each placeholder is used to indicate at least one of text, pictures, and videos;
- obtaining multiple materials according to the video generation request, wherein the types of the multiple materials include at least one of text, pictures, and videos;
- based on the types of the materials, the multiple materials are respectively imported into the corresponding placeholder positions in the video template and rendered to obtain a synthesized video.
- an embodiment of the present disclosure provides a video processing apparatus, including:
- a receiving module for receiving a video generation request
- the first obtaining module is configured to obtain a video template according to the video generation request, wherein the video template includes a plurality of placeholders, and each placeholder is used to indicate at least one of text, pictures, and videos;
- a second acquiring module configured to acquire multiple materials according to the video generation request, wherein the types of the multiple materials include at least one of text, pictures and videos;
- a rendering module configured to import the multiple materials into the corresponding placeholder positions in the video template based on the types of the materials and render them to obtain a synthesized video.
- embodiments of the present disclosure provide an electronic device, including: a processor and a memory;
- the memory stores computer-executable instructions
- the processor executes the computer-executable instructions, so that the electronic device executes the video processing method described in the first aspect.
- an embodiment of the present disclosure provides a computer-readable storage medium, where computer-executable instructions are stored in the computer-readable storage medium, and when a processor executes the computer-executable instructions, the video processing method described in the first aspect above is implemented.
- an embodiment of the present disclosure provides a computer program product, including a computer program that, when executed by a processor, implements the video processing method described in the first aspect.
- an embodiment of the present disclosure provides a computer program that, when executed by a processor, implements the video processing method described in the first aspect.
- the method first receives a video generation request from a user, and then acquires a video template and multiple materials according to the video generation request, wherein the video template includes a plurality of placeholders, each placeholder is used to indicate at least one of text, picture, and video, and the multiple materials include at least one of text, pictures, and videos; then, based on the material types, the multiple materials are imported into the positions of the corresponding placeholders in the video template and rendered to obtain the synthesized video.
- the embodiments of the present disclosure render multiple types of materials including text, pictures, and videos based on placeholders to obtain rendered videos, and can generate multiple types of rendered videos, which improves user experience.
- FIG. 1 is a schematic scene diagram of a video processing method according to an embodiment of the present disclosure
- FIG. 2 is a schematic flowchart 1 of a video processing method provided by an embodiment of the present disclosure
- FIG. 3 is a second schematic flowchart of a video processing method provided by an embodiment of the present disclosure.
- FIG. 4 is a structural block diagram of a video processing apparatus provided by an embodiment of the present disclosure.
- FIG. 5 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present disclosure.
- an embodiment of the present disclosure provides a video processing method: a video template is obtained according to a user request, where the video template includes a plurality of placeholders and each placeholder is used to indicate at least one of text, pictures, and videos in the video; multiple materials are obtained according to the user's request, where the multiple materials include at least one of text, pictures, and videos; and the various types of materials, including text, pictures, and videos, are respectively imported into the positions of the corresponding placeholders in the video template and rendered, resulting in a composite video.
- the placeholders in the video template are used to render various types of materials including text, pictures and videos to obtain a rendered video, which improves the user experience.
- FIG. 1 is a schematic scene diagram of a video processing method provided by an embodiment of the present disclosure.
- the system provided in this embodiment includes a client 101 and a server 102 .
- the client 101 may be installed on devices such as mobile phones, tablet computers, personal computers, wearable electronic devices, and smart home devices. This embodiment does not specifically limit the implementation of the client terminal 101, as long as the client terminal 101 can perform input and output interaction with the user.
- Server 102 may comprise a single server or a cluster of several servers.
- FIG. 2 is a first schematic flowchart of a video processing method provided by an embodiment of the present disclosure.
- the method of this embodiment can be applied to the server shown in FIG. 1 , and the video processing method includes:
- the client may be installed on devices such as personal computers, tablet computers, mobile phones, wearable electronic devices, and smart home devices.
- the client can send the user's video generation request to the server.
- S202 Acquire a video template according to the video generation request, wherein the video template includes multiple placeholders, wherein each placeholder is used to indicate at least one of text, picture and video.
- a video template may be obtained from a video template library, where the video template includes multiple placeholders.
- each placeholder may be set with a corresponding type label, and the type label is used to indicate that the corresponding placeholder belongs to at least one type of text, picture and video.
- the placeholder may have a preset format and include at least one of the following parameters: a type label used to indicate the type of material supported by the placeholder (e.g., at least one of text, picture, and video); and a placeholder identifier used to indicate the corresponding rendering effect, material resolution, and the like.
- when the placeholder supports video-type materials, the placeholder may further include the following parameters: the start time of the video material, the end time of the video material, the start time in the video to be synthesized, and the end time in the video to be synthesized.
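The placeholder parameters above can be pictured as a simple record. The following is a minimal illustrative sketch in Python; the class name, field names, and example values are assumptions made for illustration, not identifiers from the disclosure:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Placeholder:
    # Type label: which material type the placeholder supports
    # ("text", "picture", or "video").
    type_label: str
    # Identifier indicating the rendering effect, material resolution, etc.
    placeholder_id: str
    # Extra parameters used only when the placeholder supports video materials:
    src_in: Optional[float] = None    # start time within the video material (s)
    src_out: Optional[float] = None   # end time within the video material (s)
    dest_in: Optional[float] = None   # start time in the video to be synthesized (s)
    dest_out: Optional[float] = None  # end time in the video to be synthesized (s)

# A hypothetical video placeholder spanning seconds 5-8 of the output,
# filled from seconds 0-3 of the source material.
p = Placeholder(type_label="video", placeholder_id="slot_intro_720p",
                src_in=0.0, src_out=3.0, dest_in=5.0, dest_out=8.0)
```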
- acquiring a video template from a video template library may include the following two methods:
- a video template may be randomly selected from the video template library in response to the video generation request.
- alternatively, the user can select the corresponding video template or video template type on the client side, and the client adds the user selection information to the video generation request according to the user's selection; after receiving the video generation request, the server parses it to obtain the user selection information and selects the video template determined by the user from the video template library according to that information.
- the process of how to create a video template includes:
- obtaining video template production materials, where the video template production materials include at least one of rendering materials and cutscenes; pre-adding multiple placeholders; and creating the video template according to the video template production materials and the pre-added multiple placeholders.
- the pre-added placeholders are used to indicate at least one of the following three types of information: pictures, text, and videos.
- the text includes letters, numbers, symbols, and the like.
- S203 Acquire multiple materials according to the video generation request, where the types of the multiple materials include at least one of text, picture, and video.
- the material may be obtained from a material library (which may include a database).
- the multiple materials may include at least one of several types, namely text, pictures, and videos; correspondingly, the placeholders in the video template may include placeholders indicating at least one of text, pictures, and videos.
- based on the type of each material, each material is imported into the position of the placeholder corresponding to its type so that the material replaces the placeholder; the image frames of the video template with the imported materials are then rendered frame by frame to obtain the composite video.
- the user's video generation request is received first, and then a video template and multiple materials are obtained according to the video generation request, wherein the video template includes multiple placeholders, each placeholder is used to indicate at least one of text, pictures, and videos, and the multiple materials include at least one of text, pictures, and videos; then, based on the material types, the multiple materials are imported into the corresponding placeholder positions in the video template and rendered to obtain a synthesized video.
- the embodiments of the present disclosure render multiple types of materials including text, pictures, and videos based on placeholders to obtain rendered videos, which can provide users with rendered videos including multiple types of materials, thereby improving user experience.
- the video processing method according to the embodiment of the present disclosure has been described above in conjunction with the server. Those skilled in the art should understand that the video processing method according to the embodiment of the present disclosure can also be executed by a device with a client installed, or by an all-in-one device that integrates the server function and the client function. For brevity, the specific steps and methods will not be repeated.
- FIG. 3 is a second schematic flowchart of a video processing method provided by an embodiment of the present disclosure.
- the above-mentioned material includes a first type label, which is used to indicate the type of the material; the placeholder includes a second type label, which is used to indicate the type indicated by the placeholder.
- step S204 based on the material type, multiple materials are imported into the corresponding placeholder positions in the video template and rendered to obtain a synthesized video, which may include:
- S301 Filter out target materials and target placeholders whose first type labels are consistent with the second type labels.
- the placeholders in the video template can be identified, which specifically includes: acquiring each video template image frame from the video template according to the video timestamp of the video to be synthesized, determining whether there is a placeholder in each video template image frame, and, if a placeholder exists, identifying it and obtaining its second type label.
- the consistency between the second type label and the first type label may include consistency of the type information indicated by the label.
- both the first type label and the second type label may take the form of tags.
- each material includes a corresponding first type tag
- the first type tag may be added when the material is generated.
- the first type tag may be the first number information of each material, and the first number information may be customized by the client to indicate any material.
- the placeholder in the video template may be a placeholder added when the video template is produced, and each placeholder is configured with a predefined second type label, where each predefined second type label is used to indicate the type of material that the placeholder matches.
- the second type tag may be second number information that matches the first number information of the material.
- all placeholders may be traversed according to the first type label of the material until a second type label consistent with the first type label of the material is queried.
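The traversal described above can be sketched as a small matching routine. This is an illustrative sketch only; the dict shape, key names, and label values are assumptions, not the disclosure's data model:

```python
def match_placeholder(first_type_label, placeholders):
    """Traverse all placeholders until one whose second type label is
    consistent with the material's first type label is found."""
    for p in placeholders:
        if p["second_type_label"] == first_type_label:
            return p
    return None  # no consistent placeholder found

slots = [
    {"id": 1, "second_type_label": "text"},
    {"id": 2, "second_type_label": "video"},
]
target = match_placeholder("video", slots)  # matches slot 2
```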
- the specific screening process is similar to the above and, for brevity, is not repeated here.
- materials can be classified into three types: text materials, picture materials, and video materials.
- S302 Based on the type of the target material, a corresponding preprocessing method is used to process the target material before importing it into the position of the corresponding target placeholder in the video template.
- S303 Render the image frame of the video template imported into the target material to obtain a synthesized video.
- each image frame of the video template with the imported target material is rendered to obtain a synthesized video.
- the video template into which the target material is imported has a target placeholder, and a corresponding rendering effect is used for rendering according to the target placeholder.
- the renderer corresponding to the target placeholder of the video template is identified, and the image frames of the video template with the imported materials are rendered according to the rendering effect of the renderer.
- the renderer may include a shader renderer, and the shader renderer is used to indicate rendering effect attributes such as the position, shape, transparency, and dynamic effect of the placeholder material.
- the material can be imported into the corresponding position of the video template, and corresponding rendering can be performed to improve the presentation effect of the synthesized video.
- the following mainly describes step S302 in detail, that is, how each target material is preprocessed and imported into the position of the target placeholder in the video template; the specific details are as follows:
- if the target material includes a text material, typesetting may be performed on the text material according to characteristics such as the size or shape of the placeholder, and the typeset text material may be converted into a texture format.
- if the target material includes a picture material, the picture material is converted into a texture format and then imported into the position of the target placeholder in the video template.
- the image file format may include BMP, TGA, JPG, GIF, PNG and other formats; after being converted to texture format, the texture format may include R5G6B5, A4R4G4B4, A1R5G5B5, R8G8B8, A8R8G8B8 and other formats.
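As one concrete example of such a conversion, packed R8G8B8 pixel data can be expanded to A8R8G8B8 by prepending an alpha byte per pixel. The following pure-Python sketch is illustrative only and is not the disclosure's implementation:

```python
def rgb888_to_argb8888(pixels: bytes, alpha: int = 255) -> bytes:
    """Convert packed R8G8B8 pixel bytes to A8R8G8B8 (one of the texture
    formats listed above) by inserting an alpha channel per pixel."""
    out = bytearray()
    for i in range(0, len(pixels), 3):
        r, g, b = pixels[i:i + 3]
        out += bytes((alpha, r, g, b))
    return bytes(out)

# One fully opaque pixel: RGB (10, 20, 30) becomes ARGB (255, 10, 20, 30).
argb = rgb888_to_argb8888(bytes([10, 20, 30]))
```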
- any known or future-developed texture conversion method can be used to convert the picture material into a texture format, and the present disclosure does not limit the specific texture conversion method.
- if the target material includes a video material, an image frame is extracted from the video material, converted into a texture format, and then imported into the position of the target placeholder in the video template.
- image frames of the corresponding video material need to be screened out from the video material according to the timestamp of the video to be synthesized.
- the specific process of extracting image frames from the video material includes: determining the first start timestamp and the first end timestamp of the video material in the video to be synthesized; determining the second start timestamp and the second end timestamp indicated by the placeholder; calculating the target timestamp of the image frame to be extracted from the video material according to the timestamp of the currently rendered frame of the video to be synthesized, the first start timestamp and the first end timestamp, and the second start timestamp and the second end timestamp; and extracting the image frame from the video material according to the target timestamp.
- calculating the target timestamp includes: obtaining the time length indicated by the placeholder according to the second end timestamp and the second start timestamp; obtaining the proportional time length of the target timestamp in the video material according to the product of the time length indicated by the placeholder and the ratio of the difference between the timestamp of the currently rendered frame and the first start timestamp to the difference between the first end timestamp and the first start timestamp; and obtaining the target timestamp according to the second start timestamp and the proportional time length. The specific formula of the calculation process can be:
- t_src = src_in + ((curTime - dest_in) / (dest_out - dest_in)) × (src_out - src_in)
- where t_src is the target timestamp of the extracted image frame; dest_in is the first start timestamp; dest_out is the first end timestamp; src_in is the second start timestamp; src_out is the second end timestamp; curTime is the timestamp of the currently rendered frame.
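Using the variable names defined above, the proportional mapping can be expressed directly. The sketch below is illustrative and assumes all timestamps are in the same unit (e.g., seconds) and that dest_out > dest_in:

```python
def target_timestamp(cur_time, dest_in, dest_out, src_in, src_out):
    """Map the currently rendered frame's timestamp (cur_time) in the
    video to be synthesized onto the target timestamp t_src in the
    video material, proportionally to the placeholder's time span."""
    ratio = (cur_time - dest_in) / (dest_out - dest_in)
    return src_in + ratio * (src_out - src_in)

# One third of the way through the 5-8 s destination span maps to
# one third of the way through the 0-3 s source span.
t_src = target_timestamp(6.0, 5.0, 8.0, 0.0, 3.0)
```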
- extracting an image frame from the video material according to the target timestamp includes: if the time length of the video material is less than the time length indicated by the placeholder corresponding to the video material in the video template, returning to the starting point of the video material and continuing to extract image frames from there.
- the time length indicated by the placeholder may be obtained according to the difference between the second start timestamp indicated by the placeholder and the second end timestamp.
- the image frames are extracted from the video material according to the target timestamp of the extracted image frame; if the time length of the video material is less than the time length indicated by the placeholder in the video template, extraction wraps around to the starting point of the video material and continues, that is, the index of the extracted image frame is idx = (t_src % T) × fps (where % is the remainder operation), T is the time length of the video material, and fps is the frame rate.
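The wrap-around extraction can be sketched as follows; the function name and the choice to truncate to an integer index are illustrative assumptions:

```python
def frame_index(t_src, material_length, fps):
    """Select a frame index from the video material for target timestamp
    t_src; if t_src exceeds the material's length T, wrap around to the
    start via the remainder, i.e. idx = (t_src % T) * fps."""
    return int((t_src % material_length) * fps)

# A 3 s clip at 30 fps: target timestamp 7.0 s wraps to 1.0 s, frame 30.
idx = frame_index(7.0, 3.0, 30)
```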
- the video processing method provided by the above embodiment may be executed by the server, and the video generation request comes from the client. Accordingly, after step S204, in which the multiple materials are respectively imported into the corresponding placeholder positions in the video template and rendered to obtain the synthesized video, the method further includes: sending the synthesized video to the client. Sending the synthesized video to the client further improves the user experience.
- FIG. 4 is a structural block diagram of a video processing apparatus provided by an embodiment of the present disclosure.
- the apparatus includes: a receiving module 401 , a first obtaining module 402 , a second obtaining module 403 and a rendering module 404 .
- the receiving module 401 is used for receiving a video generation request
- the first obtaining module 402 is configured to obtain a video template according to the video generation request, wherein the video template includes a plurality of placeholders, wherein each of the placeholders is used to indicate text, pictures and videos. at least one;
- a second acquiring module 403, configured to acquire multiple materials according to the video generation request, wherein the types of the multiple materials include at least one of text, pictures, and videos;
- the rendering module 404 is configured to, based on the types of the materials, respectively import the multiple materials into the corresponding placeholder positions in the video template and render them to obtain a synthesized video.
- the material includes a first type tag, and the placeholder includes a second type tag;
- the rendering module 404 includes:
- a screening unit 4041 configured to filter out target materials and target placeholders whose labels of the first type are consistent with those of the second type;
- Importing unit 4042 configured to import the target material into the position of the target placeholder in the video template after preprocessing
- the rendering unit 4043 is configured to render the image frame of the video template imported into the target material to obtain the synthesized video.
- the rendering unit 4043 includes:
- the first rendering sub-unit 40431 is configured to import the position of the target placeholder in the video template after typesetting and texturing the text material if the target material includes text material;
- the second rendering subunit 40432 is configured to import the position of the target placeholder in the video template after the image material is converted into a texture format if the target material includes a picture material;
- the third rendering subunit 40433 is configured to extract an image frame from the video material if the target material includes a video material, and after converting the extracted image frame to a texture format, import it into the video template The location of the target placeholder.
- the third rendering subunit 40433 is specifically configured to: determine the first start timestamp and the first end timestamp of the video material in the video to be synthesized; determine the second start timestamp and the second end timestamp indicated by the placeholder; calculate the target timestamp of the image frame to be extracted from the video material according to the timestamp of the currently rendered frame of the video to be synthesized, the first start timestamp and the first end timestamp, and the second start timestamp and the second end timestamp; and extract the image frame from the video material according to the target timestamp.
- the third rendering subunit 40433 is specifically configured to: obtain the time length indicated by the placeholder according to the second end timestamp and the second start timestamp; obtain the proportional time length of the target timestamp in the video material according to the product of the time length indicated by the placeholder and the ratio of the difference between the timestamp of the currently rendered frame and the first start timestamp to the difference between the first end timestamp and the first start timestamp; and obtain the target timestamp according to the second start timestamp and the proportional time length of the target timestamp in the video material.
- the calculation is performed to extract the image frame from the video material; the formula for the target timestamp can be:
- t_src = src_in + ((curTime - dest_in) / (dest_out - dest_in)) × (src_out - src_in)
- where t_src is the target timestamp of the extracted image frame; dest_in is the first start timestamp; dest_out is the first end timestamp; src_in is the second start timestamp; src_out is the second end timestamp; curTime is the timestamp of the currently rendered frame.
- the third rendering subunit 40433 is further configured to, if the time length of the video material is less than the time length indicated by the placeholder corresponding to the video material in the video template, continue to extract image frames from the starting point of the video material.
- the rendering unit 4043 is specifically configured to: identify the renderer corresponding to the target placeholder in the video template; and render the image frames of the video template with the imported target material according to the rendering effect of the renderer.
- the apparatus further includes: a production module 405, configured to obtain video template production materials, wherein the video template production materials include at least one of rendering materials and cutscenes; pre-add the plurality of placeholders; and produce the video template according to the video template production materials and the pre-added plurality of placeholders.
- the apparatus is applied to a server, the video generation request is from a client, and the apparatus further includes:
- a sending module 406, configured to send the synthesized video to the client after the multiple materials are respectively imported into the corresponding placeholder positions in the video template and rendered to obtain the synthesized video.
- the embodiments of the present disclosure further provide an electronic device.
- the electronic device 500 may be a client device or a server.
- the client device may include, but is not limited to, mobile clients such as mobile phones, notebook computers, digital broadcast receivers, personal digital assistants (PDAs), tablet computers (PADs), portable multimedia players (PMPs), in-vehicle clients (such as in-vehicle navigation clients), and wearable electronic devices, as well as fixed clients such as digital TVs (televisions), desktop computers, and smart home devices.
- the electronic device shown in FIG. 5 is only an example, and should not impose any limitation on the function and scope of use of the embodiments of the present disclosure.
- the electronic device 500 may include a processing device (such as a central processing unit or a graphics processor) 501, which may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 502 or a program loaded from a storage device 508 into a random access memory (RAM) 503, thereby implementing the video processing method according to the embodiments of the present disclosure.
- the RAM 503 also stores various programs and data required for the operation of the electronic device 500.
- the processing device 501, the ROM 502, and the RAM 503 are connected to each other through a bus 504.
- an input/output (I/O) interface 505 is also connected to the bus 504.
- the following devices can be connected to the I/O interface 505: input devices 506 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 507 including, for example, a liquid crystal display (LCD), speaker, vibrator, etc.; storage devices 508 including, for example, magnetic tape, hard disk, etc.; and a communication device 509. The communication device 509 may allow the electronic device 500 to communicate wirelessly or by wire with other devices to exchange data. While FIG. 5 shows the electronic device 500 having various devices, it should be understood that not all of the illustrated devices are required to be implemented or provided; more or fewer devices may alternatively be implemented or provided.
- embodiments of the present disclosure include a computer program product comprising a computer program carried on a computer-readable medium, the computer program comprising program code for performing the method illustrated in the flowchart.
- the computer program may be downloaded and installed from the network via the communication device 509, or from the storage device 508, or from the ROM 502.
- when the computer program is executed by the processing device 501, the above-mentioned functions defined in the methods of the embodiments of the present disclosure are executed.
- the computer-readable medium mentioned above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the above two.
- the computer-readable storage medium can be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device, or a combination of any of the above.
- computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
- a computer-readable storage medium may be any tangible medium that includes or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
- a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave with computer-readable program code embodied thereon. Such propagated data signals may take a variety of forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
- a computer-readable signal medium can also be any computer-readable medium other than a computer-readable storage medium that can transmit, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
- the program code included on the computer-readable medium can be transmitted by any suitable medium, including but not limited to: electric wire, optical cable, radio frequency (RF for short), etc., or any suitable combination of the above.
- the above-mentioned computer-readable medium may be included in the above-mentioned electronic apparatus; or may exist alone without being incorporated into the electronic apparatus.
- the aforementioned computer-readable medium carries one or more programs, and when the aforementioned one or more programs are executed by the electronic device, causes the electronic device to execute the methods shown in the foregoing embodiments.
- computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
- the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
- the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or can be connected to an external computer (for example, through the Internet using an Internet service provider).
- each block in the flowcharts or block diagrams may represent a module, segment, or portion of code that contains one or more executable instructions for implementing the specified logical functions.
- the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
- each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented in dedicated hardware-based systems that perform the specified functions or operations, or can be implemented in a combination of dedicated hardware and computer instructions.
- the units involved in the embodiments of the present disclosure may be implemented in a software manner, and may also be implemented in a hardware manner.
- the name of the unit does not constitute a limitation of the unit itself under certain circumstances, for example, the first obtaining unit may also be described as "a unit that obtains at least two Internet Protocol addresses".
- exemplary types of hardware logic components include: field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), etc.
- a machine-readable medium may be a tangible medium that may include or store a program for use by or in connection with the instruction execution system, apparatus or device.
- the machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
- machine-readable media may include, but are not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, or devices, or any suitable combination of the foregoing.
- machine-readable storage media would include one or more wire-based electrical connections, portable computer disks, hard disks, random access memory (RAM), read only memory (ROM), erasable programmable read only memory (EPROM or flash memory), fiber optics, compact disk read only memory (CD-ROM), optical storage, magnetic storage, or any suitable combination of the foregoing.
- a video processing method including: receiving a video generation request;
- acquiring a video template according to the video generation request, wherein the video template includes a plurality of placeholders, each of which is used to indicate at least one of text, pictures, and videos;
- acquiring multiple materials according to the video generation request, wherein the types of the multiple materials include at least one of text, pictures, and videos;
- importing the multiple materials, based on their types, into the positions of the corresponding placeholders in the video template and rendering them to obtain a synthesized video.
- the material includes a first type tag, and the placeholder includes a second type tag; importing the multiple materials, based on their types, into the positions of the corresponding placeholders in the video template and rendering them to obtain the synthesized video includes: filtering out a target material and a target placeholder whose first type tag is consistent with the second type tag; preprocessing the target material and then importing it into the position of the target placeholder in the video template; and rendering the image frames of the video template into which the target material has been imported, to obtain the synthesized video.
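The filter-by-type-tag step above can be sketched as follows. The dictionary fields (`type_tag`, `id`, `content`) and the function name are illustrative assumptions, not the disclosed implementation:

```python
def match_materials(placeholders, materials):
    """Pair each placeholder with the first unused material whose type tag
    (the material's "first type tag") matches the placeholder's type tag
    (the "second type tag")."""
    pool = list(materials)
    pairs = {}
    for ph in placeholders:
        for m in pool:
            if m["type_tag"] == ph["type_tag"]:
                pairs[ph["id"]] = m
                pool.remove(m)  # each material fills at most one placeholder
                break
    return pairs

# Hypothetical template placeholders and request materials.
placeholders = [{"id": "p1", "type_tag": "text"}, {"id": "p2", "type_tag": "picture"}]
materials = [{"type_tag": "picture", "content": "cover.png"},
             {"type_tag": "text", "content": "Hello"}]
```

One design consequence of matching by tag rather than by position is that materials can arrive in any order in the generation request.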
- preprocessing the target material and then importing it into the position of the target placeholder in the video template includes: if the target material includes text material, performing typesetting and texture-format conversion on the text material and then importing it into the position of the target placeholder in the video template; if the target material includes picture material, performing texture-format conversion on the picture material and then importing it into the position of the target placeholder in the video template; and if the target material includes video material, extracting image frames from the video material, performing texture-format conversion on the extracted image frames, and then importing them into the position of the target placeholder in the video template.
- extracting image frames from the video material includes: determining a first start timestamp and a first end timestamp of the video material in the video to be synthesized; determining a second start timestamp and a second end timestamp indicated by the placeholder; calculating, according to the timestamp of the currently rendered frame of the video to be synthesized, the first start timestamp and the first end timestamp, and the second start timestamp and the second end timestamp, a target timestamp of the image frame to be extracted from the video material; and extracting the image frame from the video material according to the target timestamp.
- calculating the target timestamp of the image frame to be extracted from the video material according to the timestamp of the currently rendered frame of the video to be synthesized, the first start timestamp and the first end timestamp, and the second start timestamp and the second end timestamp includes: obtaining the time length indicated by the placeholder according to the second end timestamp and the second start timestamp; multiplying the ratio of the difference between the timestamp of the currently rendered frame and the first start timestamp to the difference between the first end timestamp and the first start timestamp, by the time length indicated by the placeholder, to obtain the proportional time length of the target timestamp within the video material; and obtaining the target timestamp according to the second start timestamp and the proportional time length of the target timestamp within the video material.
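Under the definitions above, the calculation is: proportional length = ((render_ts - first_start) / (first_end - first_start)) * (second_end - second_start), and the target timestamp is the second start timestamp plus that proportional length. A sketch, with the function name and argument order assumed for illustration:

```python
def target_timestamp(render_ts, first_start, first_end, second_start, second_end):
    """Map the currently rendered frame's timestamp in the output video to a
    timestamp in the source video material, per the proportional calculation
    described above."""
    # Time length indicated by the placeholder.
    placeholder_len = second_end - second_start
    # Fraction of the material's span in the output video already rendered.
    progress = (render_ts - first_start) / (first_end - first_start)
    # Offset the proportional time length from the placeholder's start.
    return second_start + progress * placeholder_len

# For a material spanning 0-10 s of the output video and a placeholder
# indicating 2-4 s of the source, the frame rendered at 5 s maps to 3 s.
```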
- extracting the image frame from the video material according to the target timestamp includes: if the time length of the video material is less than the time length indicated by the placeholder corresponding to the video material in the video template, continuing to extract image frames again from the starting point of the video material.
- rendering the image frames of the video template into which the target material has been imported includes: identifying a renderer corresponding to the target placeholder in the video template; and rendering the image frames of the video template after the target material has been imported, according to the rendering special effects of the renderer.
- before receiving the video generation request, the method further includes: acquiring video template production materials, wherein the video template production materials include at least one of rendering materials and cutscenes; pre-adding the multiple placeholders; and producing the video template according to the video template production materials and the multiple pre-added placeholders.
- a video processing apparatus including:
- a receiving module for receiving a video generation request
- a first obtaining module, configured to obtain a video template according to the video generation request, wherein the video template includes a plurality of placeholders, each of which is used to indicate at least one of text, pictures, and videos;
- a second acquiring module configured to acquire multiple materials according to the video generation request, wherein the types of the multiple materials include at least one of text, pictures and videos;
- a rendering module configured to import the multiple materials into the corresponding placeholder positions in the video template based on the types of the materials and render them to obtain a synthesized video.
- the material includes a first type tag
- the placeholder includes a second type tag
- the rendering module includes: a filtering unit, configured to filter out a target material and a target placeholder whose first type tag is consistent with the second type tag; an import unit, configured to preprocess the target material and import it into the position of the target placeholder in the video template; and a rendering unit, configured to render the image frames of the video template into which the target material has been imported, to obtain the synthesized video.
- the rendering unit includes: a first rendering subunit, configured to, if the target material includes text material, perform typesetting and texture-format conversion on the text material and then import it into the position of the target placeholder in the video template; a second rendering subunit, configured to, if the target material includes picture material, perform texture-format conversion on the picture material and then import it into the position of the target placeholder in the video template; and a third rendering subunit, configured to, if the target material includes video material, extract image frames from the video material, perform texture-format conversion on the extracted image frames, and then import them into the position of the target placeholder in the video template.
- the third rendering subunit is specifically configured to: determine a first start timestamp and a first end timestamp of the video material in the video to be synthesized; determine a second start timestamp and a second end timestamp indicated by the placeholder; calculate, according to the timestamp of the currently rendered frame of the video to be synthesized, the first start timestamp and the first end timestamp, and the second start timestamp and the second end timestamp, a target timestamp of the image frame to be extracted from the video material; and extract the image frame from the video material according to the target timestamp.
- the third rendering subunit is specifically configured to: obtain the time length indicated by the placeholder according to the second end timestamp and the second start timestamp; multiply the ratio of the difference between the timestamp of the currently rendered frame and the first start timestamp to the difference between the first end timestamp and the first start timestamp, by the time length indicated by the placeholder, to obtain the proportional time length of the target timestamp within the video material; and obtain the target timestamp according to the second start timestamp and the proportional time length of the target timestamp within the video material.
- the third rendering subunit is further configured to: if the time length of the video material is less than the time length indicated by the placeholder corresponding to the video material in the video template, continue to extract image frames again from the starting point of the video material.
- the rendering unit is specifically configured to identify a renderer corresponding to the target placeholder in the video template, and to render the image frames of the video template according to the rendering special effects of the renderer.
- the apparatus further includes: a production module, configured to acquire video template production materials, wherein the video template production materials include at least one of rendering materials and cutscenes; pre-add the multiple placeholders; and produce the video template according to the video template production materials and the multiple pre-added placeholders.
- an electronic device comprising: a processor and a memory;
- the memory stores computer-executable instructions
- the processor executes the computer-executable instructions, so that the electronic device executes the video processing method described in the first aspect and various possible designs of the first aspect.
- a computer-readable storage medium, where computer-executable instructions are stored in the computer-readable storage medium, and when a processor executes the computer-executable instructions, the video processing method described in the first aspect and various possible designs of the first aspect is implemented.
- embodiments of the present disclosure provide a computer program product, including a computer program that, when executed by a processor, implements the video processing method described in the first aspect and various possible designs of the first aspect.
- an embodiment of the present disclosure provides a computer program that, when executed by a processor, implements the video processing method described in the first aspect and various possible designs of the first aspect.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Databases & Information Systems (AREA)
- Processing Or Creating Images (AREA)
Abstract
Description
Claims (13)
- A video processing method, the method comprising: receiving a video generation request; acquiring a video template according to the video generation request, wherein the video template includes a plurality of placeholders, each of which is used to indicate at least one of text, pictures, and videos; acquiring multiple materials according to the video generation request, wherein the types of the multiple materials include at least one of text, pictures, and videos; and, based on the types of the materials, importing the multiple materials into the positions of the corresponding placeholders in the video template and rendering them to obtain a synthesized video.
- The method according to claim 1, wherein the material includes a first type tag and the placeholder includes a second type tag; and importing the multiple materials, based on their types, into the positions of the corresponding placeholders in the video template and rendering them to obtain the synthesized video comprises: filtering out a target material and a target placeholder whose first type tag is consistent with the second type tag; preprocessing the target material and then importing it into the position of the target placeholder in the video template; and rendering the image frames of the video template into which the target material has been imported, to obtain the synthesized video.
- The method according to claim 2, wherein preprocessing the target material and then importing it into the position of the target placeholder in the video template comprises: if the target material includes text material, performing typesetting and texture-format conversion on the text material and then importing it into the position of the target placeholder in the video template; if the target material includes picture material, performing texture-format conversion on the picture material and then importing it into the position of the target placeholder in the video template; and if the target material includes video material, extracting image frames from the video material, performing texture-format conversion on the extracted image frames, and then importing them into the position of the target placeholder in the video template.
- The method according to claim 3, wherein extracting image frames from the video material comprises: determining a first start timestamp and a first end timestamp of the video material in the video to be synthesized; determining a second start timestamp and a second end timestamp indicated by the placeholder; calculating, according to the timestamp of the currently rendered frame of the video to be synthesized, the first start timestamp and the first end timestamp, and the second start timestamp and the second end timestamp, a target timestamp of the image frame to be extracted from the video material; and extracting the image frame from the video material according to the target timestamp.
- The method according to claim 4, wherein calculating the target timestamp of the image frame to be extracted from the video material according to the timestamp of the currently rendered frame of the video to be synthesized, the first start timestamp and the first end timestamp, and the second start timestamp and the second end timestamp comprises: obtaining the time length indicated by the placeholder according to the second end timestamp and the second start timestamp; multiplying the ratio of the difference between the timestamp of the currently rendered frame and the first start timestamp to the difference between the first end timestamp and the first start timestamp, by the time length indicated by the placeholder, to obtain the proportional time length of the target timestamp within the video material; and obtaining the target timestamp according to the second start timestamp and the proportional time length of the target timestamp within the video material.
- The method according to claim 4 or 5, wherein extracting the image frame from the video material according to the target timestamp comprises: if the time length of the video material is less than the time length indicated by the placeholder corresponding to the video material in the video template, continuing to extract image frames again from the starting point of the video material.
- The method according to any one of claims 2 to 6, wherein rendering the image frames of the video template into which the target material has been imported comprises: identifying a renderer corresponding to the target placeholder in the video template; and rendering the image frames of the video template after the target material has been imported, according to the rendering special effects of the renderer.
- The method according to any one of claims 1 to 7, wherein, before receiving the video generation request, the method further comprises: acquiring video template production materials, wherein the video template production materials include at least one of rendering materials and cutscenes; pre-adding the multiple placeholders; and producing the video template according to the video template production materials and the pre-added multiple placeholders.
- A video processing apparatus, the apparatus comprising: a receiving module, configured to receive a video generation request; a first obtaining module, configured to obtain a video template according to the video generation request, wherein the video template includes a plurality of placeholders, each of which is used to indicate at least one of text, pictures, and videos; a second obtaining module, configured to obtain multiple materials according to the video generation request, wherein the types of the multiple materials include at least one of text, pictures, and videos; and a rendering module, configured to import the multiple materials, based on their types, into the positions of the corresponding placeholders in the video template and render them to obtain a synthesized video.
- An electronic device, comprising: a processor and a memory; the memory stores computer-executable instructions; and the processor executes the computer-executable instructions stored in the memory, so that the electronic device executes the video processing method according to any one of claims 1 to 8.
- A computer-readable storage medium, wherein computer-executable instructions are stored in the computer-readable storage medium, and when a processor executes the computer-executable instructions, the video processing method according to any one of claims 1 to 8 is implemented.
- A computer program product, comprising a computer program that, when executed by a processor, implements the video processing method according to any one of claims 1 to 8.
- A computer program that, when executed by a processor, implements the video processing method according to any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US18/551,967 US20240177374A1 (en) | 2021-04-09 | 2022-03-21 | Video processing method, apparatus and device |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110385345.3 | 2021-04-09 | ||
- CN202110385345.3A CN115209215B (zh) | 2021-04-09 | Video processing method, apparatus and device
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022213801A1 true WO2022213801A1 (zh) | 2022-10-13 |
Family
ID=83545140
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2022/082095 WO2022213801A1 (zh) | 2021-04-09 | 2022-03-21 | 视频处理方法、装置及设备 |
Country Status (3)
Country | Link |
---|---|
US (1) | US20240177374A1 (zh) |
CN (1) | CN115209215B (zh) |
WO (1) | WO2022213801A1 (zh) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- CN116091738A (zh) * | 2023-04-07 | 2023-05-09 | 湖南快乐阳光互动娱乐传媒有限公司 | Virtual AR generation method and system, electronic device, and storage medium
- WO2024160128A1 (zh) * | 2023-02-03 | 2024-08-08 | 北京字跳网络技术有限公司 | Method and apparatus for generating a video template, and electronic device
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- CN115988255A (zh) * | 2022-12-23 | 2023-04-18 | 北京字跳网络技术有限公司 | Special effect generation method and apparatus, electronic device, and storage medium
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- CN103928039A (zh) * | 2014-04-15 | 2014-07-16 | 北京奇艺世纪科技有限公司 | Video synthesis method and apparatus
- US20170238067A1 (en) * | 2016-02-17 | 2017-08-17 | Adobe Systems Incorporated | Systems and methods for dynamic creative optimization for video advertisements
- CN107770626A (zh) * | 2017-11-06 | 2018-03-06 | 腾讯科技(深圳)有限公司 | Video material processing method, video synthesis method, apparatus, and storage medium
- CN109168028A (zh) * | 2018-11-06 | 2019-01-08 | 北京达佳互联信息技术有限公司 | Video generation method and apparatus, server, and storage medium
- CN110072120A (zh) * | 2019-04-23 | 2019-07-30 | 上海偶视信息科技有限公司 | Video generation method and apparatus, computer device, and storage medium
Family Cites Families (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9576302B2 (en) * | 2007-05-31 | 2017-02-21 | Aditall Llc. | System and method for dynamic generation of video content |
- CN101448089B (zh) * | 2007-11-26 | 2013-03-06 | 新奥特(北京)视频技术有限公司 | Non-linear editing system
EP2238743A4 (en) * | 2007-12-17 | 2011-03-30 | Stein Gausereide | REAL-TIME VIDEO INCLUSION SYSTEM |
EP2428957B1 (en) * | 2010-09-10 | 2018-02-21 | Nero Ag | Time stamp creation and evaluation in media effect template |
US9277198B2 (en) * | 2012-01-31 | 2016-03-01 | Newblue, Inc. | Systems and methods for media personalization using templates |
- CN111131727A (zh) * | 2018-10-31 | 2020-05-08 | 北京国双科技有限公司 | Video data processing method and apparatus
- CN109769141B (zh) * | 2019-01-31 | 2020-07-14 | 北京字节跳动网络技术有限公司 | Video generation method and apparatus, electronic device, and storage medium
- CN110060317A (zh) * | 2019-03-16 | 2019-07-26 | 平安城市建设科技(深圳)有限公司 | Automatic poster configuration method, device, storage medium, and apparatus
- CN110708596A (zh) * | 2019-09-29 | 2020-01-17 | 北京达佳互联信息技术有限公司 | Method and apparatus for generating a video, electronic device, and readable storage medium
- CN111222063A (zh) * | 2019-11-26 | 2020-06-02 | 北京达佳互联信息技术有限公司 | Rich text rendering method and apparatus, electronic device, and storage medium
- CN111669623B (zh) * | 2020-06-28 | 2023-10-13 | 腾讯科技(深圳)有限公司 | Video special effect processing method and apparatus, and electronic device
- CN111966931A (zh) * | 2020-08-23 | 2020-11-20 | 云知声智能科技股份有限公司 | Control rendering method and apparatus
- 2021-04-09: CN application CN202110385345.3A filed; patent CN115209215B (zh), status Active
- 2022-03-21: WO application PCT/CN2022/082095 filed as WO2022213801A1 (zh), status Application Filing
- 2022-03-21: US application US18/551,967 filed as US20240177374A1 (en), status Pending
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- WO2024160128A1 (zh) * | 2023-02-03 | 2024-08-08 | 北京字跳网络技术有限公司 | Method and apparatus for generating a video template, and electronic device
- CN116091738A (zh) * | 2023-04-07 | 2023-05-09 | 湖南快乐阳光互动娱乐传媒有限公司 | Virtual AR generation method and system, electronic device, and storage medium
- CN116091738B (zh) * | 2023-04-07 | 2023-06-16 | 湖南快乐阳光互动娱乐传媒有限公司 | Virtual AR generation method and system, electronic device, and storage medium
Also Published As
Publication number | Publication date |
---|---|
US20240177374A1 (en) | 2024-05-30 |
CN115209215A (zh) | 2022-10-18 |
CN115209215B (zh) | 2024-07-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
- WO2022213801A1 (zh) | Video processing method, apparatus and device | |
- WO2020082870A1 (zh) | Instant video display method and apparatus, terminal device, and storage medium | |
- WO2020233142A1 (zh) | Multimedia file playing method and apparatus, electronic device, and storage medium | |
- WO2021179882A1 (zh) | Image drawing method and apparatus, readable medium, and electronic device | |
- WO2020151599A1 (zh) | Method and apparatus for synchronously publishing a video, electronic device, and readable storage medium | |
US11678024B2 (en) | Subtitle information display method and apparatus, and electronic device, and computer readable medium | |
US11928152B2 (en) | Search result display method, readable medium, and terminal device | |
US11785195B2 (en) | Method and apparatus for processing three-dimensional video, readable storage medium and electronic device | |
- WO2022057575A1 (zh) | Multimedia data publishing method and apparatus, device, and medium | |
- CN110321447 (zh) | Method and apparatus for determining duplicate images, electronic device, and storage medium | |
US11893770B2 (en) | Method for converting a picture into a video, device, and storage medium | |
- WO2023138441A1 (zh) | Video generation method and apparatus, device, and storage medium | |
- WO2024001545A1 (zh) | Song list display information generation method and apparatus, electronic device, and storage medium | |
- WO2024193511A1 (zh) | Interaction method and apparatus, electronic device, and computer-readable medium | |
- CN110287350 (zh) | Image retrieval method and apparatus, and electronic device | |
- WO2023098576A1 (zh) | Image processing method and apparatus, device, and medium | |
- CN112492399 (zh) | Information display method and apparatus, and electronic device | |
- WO2023138468A1 (zh) | Virtual object generation method and apparatus, device, and storage medium | |
- WO2022042398A1 (zh) | Method and apparatus for determining an object addition mode, electronic device, and medium | |
- WO2021031909A1 (zh) | Data content output method and apparatus, electronic device, and computer-readable medium | |
EP3229478B1 (en) | Cloud streaming service system, image cloud streaming service method using application code, and device therefor | |
US12020347B2 (en) | Method and apparatus for text effect processing | |
- WO2021018176A1 (zh) | Text effect processing method and apparatus | |
US12126876B2 (en) | Theme video generation method and apparatus, electronic device, and readable storage medium | |
US20240276067A1 (en) | Information processing method and apparatus, device, medium, and product |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 22783868 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 18551967 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 22783868 Country of ref document: EP Kind code of ref document: A1 |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 22783868 Country of ref document: EP Kind code of ref document: A1 |