CN112214643B - Video patch generation method and device, electronic equipment and storage medium - Google Patents

Video patch generation method and device, electronic equipment and storage medium

Info

Publication number
CN112214643B
CN112214643B
Authority
CN
China
Prior art keywords
patch
video
strategy
optimal
paster
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011105779.5A
Other languages
Chinese (zh)
Other versions
CN112214643A (en)
Inventor
刘子阳
韩珺方
刘勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Baidu China Co Ltd
Original Assignee
Baidu China Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Baidu China Co Ltd filed Critical Baidu China Co Ltd
Priority to CN202011105779.5A priority Critical patent/CN112214643B/en
Publication of CN112214643A publication Critical patent/CN112214643A/en
Application granted granted Critical
Publication of CN112214643B publication Critical patent/CN112214643B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/783Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content

Landscapes

  • Engineering & Computer Science (AREA)
  • Library & Information Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Image Processing (AREA)

Abstract

The application discloses a video patch generation method and apparatus, an electronic device, and a storage medium, relating to the field of video applications. The specific implementation scheme is as follows: extracting video features of a video; determining an optimal patch strategy of the video according to the video features, wherein the optimal patch strategy is the patch strategy with the highest priority among the patch strategies corresponding to the video features; and generating a patch by using the optimal patch strategy of the video, and synthesizing the video and the patch into a video with the patch. The present application thus provides an automated, configurable patch video generation scheme.

Description

Video patch generation method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of video services, and in particular, to the fields of video applications (APPs), short video applications, and the like.
Background
The video industry is developing rapidly, and the patch, as an important form of advertisement display and brand highlighting in video (especially short video), is an important resource.
Existing patch generation and addition schemes generally fall into two kinds. One is to mount patches in a video player, for example, mechanically playing extra patch resources before video playback starts, after video playback completes, and so on. The other is to add a patch to the video resource itself through video synthesis technology; in this case, all videos usually use a unified patch (or the patch merely shows simple information such as the author and title), or a dedicated patch production company is engaged to add personalized patches for a certain batch of short videos.
Both of the above schemes have drawbacks. The first scheme mounts the patch in a video player, so if the user plays the video by other means (such as after downloading it), the patch cannot be displayed. In the second scheme, if a professional patch production company produces personalized patches according to the video content through video synthesis technology, huge manpower and material costs are required; if a unified patch is used, or only simple fixed information such as the author and title is shown, the association with the video content is weak and personalization is lacking, which is unattractive and undermines the effect of adding patches to the video.
As can be seen, there is currently a lack of automated, configurable patch video generation schemes based on video features.
Disclosure of Invention
The application provides a video patch generation method, a device, equipment and a storage medium.
According to an aspect of the present application, there is provided a video patch generating method, including:
extracting video characteristics of the video;
determining an optimal patch strategy of the video according to the video features, wherein the optimal patch strategy is the patch strategy with the highest priority among the patch strategies corresponding to the video features;
and generating a patch by utilizing an optimal patch strategy of the video, and synthesizing the video and the patch into a video with the patch.
According to another aspect of the present application, there is provided an apparatus for generating a video patch, including:
the video understanding system is used for extracting video characteristics of the video;
the system comprises a patch synthesis system, a video processing system and a video processing system, wherein the patch synthesis system is used for determining an optimal patch strategy of a video according to video characteristics, and the optimal patch strategy is a patch strategy with the highest priority in patch strategies corresponding to the video characteristics; and generating a patch by utilizing an optimal patch strategy of the video, and synthesizing the video and the patch into a video with the patch.
According to another aspect of the present application, there is provided an electronic apparatus, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of the above.
According to another aspect of the present application, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method of any one of the above.
According to another aspect of the present application, there is provided a computer program product comprising a computer program which, when executed by a processor, implements a method as described above.
According to the present application, the video features of a video are automatically extracted, the optimal patch strategy is determined according to the video features, and the patch is generated and synthesized using the optimal patch strategy, thereby providing an automated, configurable patch video generation scheme.
It should be understood that the description of this section is not intended to identify key or critical features of the embodiments of the application or to delineate the scope of the application. Other features of the present application will become apparent from the description that follows.
Drawings
The drawings are for better understanding of the present solution and do not constitute a limitation of the present application. Wherein:
fig. 1 is a flowchart of an implementation of a video patch generating method 100 according to an embodiment of the present application;
FIG. 2 is a schematic diagram of an embodiment of the present application for generating and synthesizing patches;
fig. 3 is a schematic structural diagram of a video patch generating apparatus according to an embodiment of the present application;
fig. 4 is a schematic diagram of a video patch generating apparatus according to an embodiment of the present application;
fig. 5 is a block diagram of an electronic device used to implement the video patch generation method of an embodiment of the present application.
Detailed Description
Exemplary embodiments of the present application are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present application to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present application. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
An embodiment of the present application proposes a method for generating a video patch, and fig. 1 is a flowchart of implementation of a method 100 for generating a video patch according to an embodiment of the present application, including:
step S101: extracting video characteristics of the video;
step S102: determining an optimal patch strategy of the video according to the video features, wherein the optimal patch strategy is the patch strategy with the highest priority among the patch strategies corresponding to the video features;
step S103: and generating a patch by utilizing the optimal patch strategy of the video, and synthesizing the video and the patch into a video with the patch.
Alternatively, the video may be a short video. A short video is a form of internet content dissemination, typically a video of short duration distributed through new internet media. With the popularity of intelligent mobile terminals, the ubiquity of networks and increasing network speeds, short video, as a short, fast, high-traffic mode of content propagation, has become widespread. The embodiments of the present application can be applied to generating and synthesizing patches for short videos, as well as for traditional videos.
The method provided by the embodiment of the application can be implemented by adopting a pre-designed video patch generating device, and specifically, the core of the video patch generating device can comprise the following two parts:
the first part, the video understanding system. The video understanding system may be comprised of a plurality of pluggable video feature extraction modules. Each video feature extraction module is used for extracting video features, different video understanding algorithms are integrated according to different service requirements, and the video feature extraction module can extract video features such as actor information, video classification, video associated film and television works and the like and provide the extracted video features for a second part of the device, namely a patch synthesis system.
The second part is the patch synthesis system. The patch synthesis system can be composed of three modules, namely a patch rule engine module, a patch generation module and a patch synthesis module.
(1) Rules may be preconfigured in the patch rule engine module; these rules may include correspondences between different video features and patch strategies. A patch strategy may include at least one of a manner of generating the patch, a manner of synthesizing the video with the patch, and the like. Each patch strategy may correspond to a priority, and if the video features of a video correspond to a plurality of patch strategies, the patch strategy with the highest priority (referred to as the optimal patch strategy) may be selected from among them to generate the patch corresponding to the video and/or synthesize the video with the patch, thereby avoiding the problems caused by conflicts among the video's multiple patch strategies. The video features need not be in one-to-one correspondence with the patch strategies; for example, one video feature may correspond to multiple patch strategies, and multiple video features may correspond to the same patch strategy. The preconfigured rules specify the video features and their corresponding patch strategies, so if the patch of a certain video needs to be modified, the patch strategies corresponding to that video's features can be modified, realizing an automated, configurable patch video generation scheme driven by video features. In addition, the embodiments of the present application allow the patch video generation scheme to be changed by modifying the preconfigured rules, making the scheme easy to update and modify.
The patch rule engine module searches the rules according to the video features extracted by the video understanding system, determines at least one patch strategy corresponding to the video features, and outputs, after conflict resolution, the optimal patch strategy corresponding to the video features. The conflict resolution method may be: when the video features correspond to a plurality of patch strategies, determine the patch strategy with the highest priority as the optimal patch strategy of the video. For example, suppose a video corresponds to 5 patch strategies, namely patch strategy A, patch strategy B, patch strategy C, patch strategy D, and patch strategy E, with priorities of 1, 3, 2 and 5 respectively, where priority 5 is the highest level and priority 1 the lowest. Among the 5 patch strategies corresponding to the video, the priority of patch strategy E is the highest, i.e. patch strategy E is the optimal patch strategy of the video, and patch strategy E can be adopted to generate the video's patch and synthesize the video with the patch.
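The conflict resolution step above reduces to a maximum-by-priority selection. A minimal sketch, under the assumption that strategies are plain dicts with a numeric `priority` field (the patent does not specify a representation); note the text does not state strategy D's priority, so the value used for D below is assumed:

```python
def pick_optimal_strategy(strategies):
    """Conflict resolution: among all patch strategies matching the
    video features, the one with the highest priority wins."""
    if not strategies:
        return None
    return max(strategies, key=lambda s: s["priority"])


# The five strategies from the example; strategy E (priority 5,
# the highest level) should be selected as the optimal one.
candidates = [
    {"name": "A", "priority": 1},
    {"name": "B", "priority": 3},
    {"name": "C", "priority": 2},
    {"name": "D", "priority": 4},  # value assumed, not stated in the text
    {"name": "E", "priority": 5},
]
best = pick_optimal_strategy(candidates)
```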
Optionally, the optimal patch strategy includes at least one of a patch position, a patch size, a patch initial time, a patch duration, and a patch generation rule. Furthermore, the patch rule engine module may store the relationship information between the patch and the video in the video library, where the relationship information may include at least one of the patch position, the patch size, the patch initial time and the patch duration. When the video and the corresponding patch are synthesized later, the video and the relationship information can be extracted from the video library, and the video and the patch can be synthesized according to the relationship information.
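The relationship information stored in the video library can be pictured as a small record. A sketch, with field names that are illustrative rather than taken from the patent:

```python
from dataclasses import dataclass


@dataclass
class PatchVideoRelation:
    """Relationship information between a patch and a video, as kept
    in the video library (field names are assumptions)."""
    video_id: str
    patch_id: str
    position: tuple    # (x, y) patch position within the frame
    size: tuple        # (width, height) patch size in pixels
    start_time: float  # patch initial time, in seconds
    duration: float    # patch duration, in seconds


rel = PatchVideoRelation("video_001", "patch_001",
                         position=(10, 20), size=(320, 80),
                         start_time=0.0, duration=5.0)
```

Persisting this record separately from the patch itself is what lets the later synthesis step place an already-generated patch without re-running the rule engine.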
(2) The patch generation module may obtain patch materials (possibly including the author's avatar, nickname, associated film or television title, cover, fixed pictures, video, audio material, etc.) from the patch material library according to the optimal patch strategy determined by the patch rule engine module, use tools such as FFmpeg (Fast Forward MPEG) and the PHP GD library to synthesize the patch materials into a complete patch picture and/or patch video associated with the video features, and save the synthesized patch to the patch library for use in subsequent processes. After receiving the optimal patch strategy determined by the patch rule engine module, the patch generation module may first search the patch library; if a patch corresponding to the video features already exists in the patch library, the patch does not need to be generated again, and the patch synthesis module only needs to be instructed to synthesize the patch with the video.
(3) The patch synthesis module can be used to acquire the source video from the video library and the corresponding patch from the patch library, use the FFmpeg tool to synthesize, according to the relationship information between the patch and the video stored in the video library, a video with the patch appearing at the specified position and time with the specified duration and effect, and store the synthesized video with patch in the video library for display and use at each end.
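One plausible way to drive FFmpeg for this step is its `overlay` filter with a time-gated `enable` expression. The sketch below builds such a command line from the relationship information; the function and the keys of `rel` are assumptions for illustration, while the filter syntax (`scale`, `overlay`, `enable='between(t,a,b)'`) is standard FFmpeg:

```python
def build_overlay_command(video_path, patch_path, out_path, rel):
    """Build an FFmpeg invocation that overlays the patch on the source
    video at the stored position, start time and duration.
    The keys of `rel` are illustrative, not from the patent text."""
    x, y = rel["position"]
    w, h = rel["size"]
    start = rel["start_time"]
    end = start + rel["duration"]
    # Scale the patch to the stored size, then show it only while
    # the playback time t lies between start and end (seconds).
    filter_graph = (
        f"[1:v]scale={w}:{h}[p];"
        f"[0:v][p]overlay={x}:{y}:enable='between(t,{start},{end})'"
    )
    return ["ffmpeg", "-i", video_path, "-i", patch_path,
            "-filter_complex", filter_graph, "-c:a", "copy", out_path]


cmd = build_overlay_command("source.mp4", "patch.png", "patched.mp4",
                            {"position": (10, 20), "size": (320, 80),
                             "start_time": 0.0, "duration": 5.0})
```

The resulting list could be handed to `subprocess.run(cmd)`; copying the audio stream (`-c:a copy`) avoids re-encoding audio that the patch does not touch.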
The video patch generating method can be executed by adopting the video patch generating device.
Optionally, the extracting the video features of the video includes: and extracting at least one video characteristic of the video by adopting a corresponding video understanding algorithm according to different service requirements.
According to the embodiment of the application, each video feature extraction module in the video understanding system can be adopted to extract video features of videos respectively.
Optionally, the determining the optimal patch strategy of the video according to the video features includes:
searching a preset rule according to the video features, and acquiring at least one patch strategy corresponding to the video features;
and determining the patch strategy with the highest priority from the at least one patch strategy, and determining the patch strategy with the highest priority as the optimal patch strategy of the video.
The embodiment of the application can adopt the rule engine module in the patch synthesis system to determine the optimal patch strategy of the video.
Optionally, the optimal patch strategy includes at least one of a patch position, a patch size, a patch initial time, a patch duration, and a patch generation rule.
Optionally, generating a patch by using the optimal patch strategy of the video, and synthesizing the video and the patch into a video with a patch includes:
obtaining a patch material by using a patch generation rule, and generating a patch by using the obtained patch material;
and synthesizing the video and the patch into a video with the patch by using at least one of the patch position, the patch size, the patch initial time and the patch duration.
The patch generating module in the patch synthesis system can be adopted to acquire patch materials and generate patches.
The patch synthesis module in the patch synthesis system can be adopted to synthesize the patches generated by the patch generation module with the corresponding videos, so that the videos with patches are generated.
Optionally, the method further comprises:
and storing the relation information of the patch and the video, wherein the relation information comprises at least one of patch position, patch size, patch initial time and patch duration.
According to the embodiment of the application, the rule engine module in the patch synthesis system can be adopted to store the relation information of the patches and the videos into the video library.
The video patch generation method according to the embodiment of the present application will be described below by taking a specific flow of generating patches for a video and synthesizing the video after the video is uploaded by a user as an example. Fig. 2 is a schematic diagram of an embodiment of the present application for generating and synthesizing patches.
As shown in fig. 2, after the user uploads the video, the video features are extracted by the video understanding system, and the video features and the video information are stored in the video library.
And searching a preconfigured rule by using the video characteristics through a patch rule engine module, acquiring an optimal patch strategy, providing the optimal patch strategy for a patch generating module, and storing the relation information of the patch and the video in a video library. Wherein the relationship information includes at least one of patch position, patch size, patch initiation time, and patch duration. The patch rule engine module invokes the patch generation module.
And the patch generating module synthesizes corresponding materials in the patch material library into various types of patches such as a picture patch, a video patch and the like according to an optimal patch strategy, and stores the patches into the patch library.
And then calling a patch synthesis module to synthesize the corresponding video and the patches into a video with patches, and storing the video with patches into a video library for display at each end.
The embodiment of the application further provides a device for generating a video patch, and fig. 3 is a schematic structural diagram of a device for generating a video patch according to the embodiment of the application, including:
a video understanding system 310 for extracting video features of a video;
the patch synthesis system 320 is configured to determine an optimal patch policy of the video according to the video feature, where the optimal patch policy is a patch policy with a highest priority among patch policies corresponding to the video feature; and generating a patch by utilizing an optimal patch strategy of the video, and synthesizing the video and the patch into a video with the patch.
Fig. 4 is a schematic diagram of a video patch generating apparatus according to an embodiment of the present application. As shown in fig. 4, the video understanding system 310 optionally includes at least one video feature extraction module 311, which extracts at least one video feature of the video using a corresponding video understanding algorithm according to different service requirements.
Optionally, the patch synthesis system 320 includes a patch rule engine module 321;
the patch rule engine module 321 is configured to search a preconfigured rule according to the video feature, and obtain at least one patch policy corresponding to the video feature; and determining the patch strategy with the highest priority from the at least one patch strategy, and determining the patch strategy with the highest priority as the optimal patch strategy of the video.
Optionally, the optimal patch strategy includes at least one of a patch position, a patch size, a patch initial time, a patch duration, and a patch generation rule.
Optionally, the patch synthesis system 320 further includes a patch generation module 322 and a patch synthesis module 323:
a patch generating module 322, configured to acquire patch materials using a patch generating rule, and generate patches using the acquired patch materials;
the patch synthesis module 323 is configured to synthesize the video and the patch into a video with patch by using at least one of a patch position, a patch size, a patch initial time and a patch duration.
Optionally, the patch rule engine module 321 is further configured to store relationship information between the patch and the video, where the relationship information includes at least one of a patch position, a patch size, a patch initial time, and a patch duration.
The functions of each module in each apparatus of the embodiments of the present application may be referred to the corresponding descriptions in the above methods, which are not described herein again.
According to embodiments of the present application, there is also provided an electronic device, a readable storage medium and a computer program product.
Fig. 5 is a block diagram of an electronic device for the video patch generating method according to an embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the application described and/or claimed herein.
As shown in fig. 5, the electronic device includes: one or more processors 501, a memory 502, and interfaces for connecting the various components, including a high-speed interface and a low-speed interface. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions executing within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output device, such as a display device coupled to the interface. In other embodiments, multiple processors and/or multiple buses may be used, if desired, along with multiple memories. Also, multiple electronic devices may be connected, each providing a portion of the necessary operations (e.g., as a server array, a set of blade servers, or a multiprocessor system). One processor 501 is illustrated in fig. 5.
Memory 502 is a non-transitory computer readable storage medium provided herein. The memory stores instructions executable by the at least one processor to cause the at least one processor to perform the video patch generation method provided herein. The non-transitory computer readable storage medium of the present application stores computer instructions for causing a computer to perform the video patch generation method provided by the present application.
The memory 502, which is a non-transitory computer readable storage medium, may be used to store non-transitory software programs, non-transitory computer executable programs, and modules, such as program instructions/modules (e.g., the video understanding system 310 and the tile composition system 320 shown in fig. 3) corresponding to the video tile generation method in the embodiments of the present application. The processor 501 executes various functional applications of the server and data processing by running non-transitory software programs, instructions, and modules stored in the memory 502, that is, implements the video patch generation method in the above-described method embodiments.
The memory 502 may include a storage program area and a storage data area; the storage program area may store an operating system and at least one application program required for functionality, and the storage data area may store data created from the use of the video patch generating electronic device, and the like. In addition, the memory 502 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some embodiments, the memory 502 may optionally include memory located remotely from the processor 501, which may be connected to the video patch generating electronic device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device of the video patch generating method may further include: an input device 503 and an output device 504. The processor 501, memory 502, input devices 503 and output devices 504 may be connected by a bus or otherwise, for example in fig. 5.
The input device 503 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the video patch generating electronic device; examples include a touch screen, a keypad, a mouse, a track pad, a touch pad, a pointing stick, one or more mouse buttons, a track ball, a joystick, and the like. The output device 504 may include a display device, auxiliary lighting devices (e.g., LEDs), haptic feedback devices (e.g., vibration motors), and the like. The display device may include, but is not limited to, a liquid crystal display (LCD), a light emitting diode (LED) display, and a plasma display. In some implementations, the display device may be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application specific integrated circuits (ASICs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs, which may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special purpose or general-purpose programmable processor, that may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
These computing programs (also referred to as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or cloud host, a host product in the cloud computing service system, thereby overcoming the defects of high management difficulty and weak service expansibility in traditional physical host and Virtual Private Server (VPS) services.
It should be appreciated that in the various flows shown above, steps may be reordered, added, or deleted. For example, the steps described in the present application may be performed in parallel, sequentially, or in a different order, as long as the desired results of the technical solutions disclosed in the present application can be achieved; no limitation is imposed herein.
The above embodiments do not limit the scope of the application. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present application are intended to be included within the scope of the present application.

Claims (14)

1. A video patch generation method, comprising:
extracting video features of a video;
determining an optimal patch strategy of the video according to the video features, wherein the optimal patch strategy is the patch strategy with the highest priority among the patch strategies corresponding to the video features; and
generating a patch using the optimal patch strategy of the video, and synthesizing the video and the patch into a video with the patch;
wherein determining the optimal patch strategy of the video according to the video features comprises:
searching preconfigured rules according to the video features to acquire at least one patch strategy corresponding to the video features;
and wherein the optimal patch strategy comprises a patch generation rule, and generating the patch using the optimal patch strategy of the video comprises:
acquiring patch material according to the patch generation rule and generating a patch associated with the video features from the acquired patch material; or, if a patch associated with the video features already exists, acquiring the existing patch.
2. The method of claim 1, wherein extracting the video features of the video comprises: extracting at least one video feature of the video using a corresponding video understanding algorithm according to different service requirements.
3. The method according to claim 1 or 2, further comprising, after searching the preconfigured rules according to the video features and acquiring the at least one patch strategy corresponding to the video features:
determining the patch strategy with the highest priority from the at least one patch strategy, and taking the patch strategy with the highest priority as the optimal patch strategy of the video.
4. The method of claim 1 or 2, wherein the optimal patch strategy further comprises at least one of a patch position, a patch size, a patch start time, and a patch duration.
5. The method of claim 4, wherein synthesizing the video and the patch into the video with the patch comprises:
synthesizing the video and the patch into the video with the patch using at least one of the patch position, the patch size, the patch start time, and the patch duration.
6. The method of claim 4, further comprising:
saving relationship information between the patch and the video, wherein the relationship information comprises at least one of the patch position, the patch size, the patch start time, and the patch duration.
7. A video patch generation apparatus, comprising:
a video understanding system configured to extract video features of a video; and
a patch synthesis system configured to determine an optimal patch strategy of the video according to the video features, wherein the optimal patch strategy is the patch strategy with the highest priority among the patch strategies corresponding to the video features, to generate a patch using the optimal patch strategy of the video, and to synthesize the video and the patch into a video with the patch;
wherein the patch synthesis system comprises a patch rule engine module configured to search preconfigured rules according to the video features and acquire at least one patch strategy corresponding to the video features;
and wherein the optimal patch strategy comprises a patch generation rule, and the patch synthesis system further comprises a patch generation module configured to acquire patch material according to the patch generation rule and generate a patch associated with the video features from the acquired patch material, or, if a patch associated with the video features already exists, to acquire the existing patch.
8. The apparatus of claim 7, wherein the video understanding system comprises at least one video feature extraction module configured to extract at least one video feature of the video using a corresponding video understanding algorithm according to different service requirements.
9. The apparatus according to claim 7 or 8, wherein the patch rule engine module is further configured to, after searching the preconfigured rules according to the video features and acquiring the at least one patch strategy corresponding to the video features, determine the patch strategy with the highest priority from the at least one patch strategy and take it as the optimal patch strategy of the video.
10. The apparatus of claim 7 or 8, wherein the optimal patch strategy further comprises at least one of a patch position, a patch size, a patch start time, and a patch duration.
11. The apparatus of claim 10, wherein the patch synthesis system further comprises a patch synthesis module configured to synthesize the video and the patch into a video with the patch using at least one of the patch position, the patch size, the patch start time, and the patch duration.
12. The apparatus of claim 10, wherein the patch rule engine module is further configured to save relationship information between the patch and the video, the relationship information comprising at least one of the patch position, the patch size, the patch start time, and the patch duration.
13. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-6.
14. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1-6.
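For illustration only (not part of the claims), the method of claims 1-6 can be sketched in a few lines of Python. Every name here — `PatchStrategy`, the `RULES` table, `generate_patch`, the cache — is hypothetical; the sketch only mirrors the claimed flow: look up preconfigured rules by video feature, pick the highest-priority strategy as optimal, then generate a patch from material (or reuse an existing one) and record the position/size/start-time/duration relationship information used for compositing.

```python
# Hypothetical sketch of the claimed flow; names and rule contents are invented.
from dataclasses import dataclass

@dataclass
class PatchStrategy:
    priority: int              # higher priority wins
    generation_rule: str       # which patch material/template to use
    position: tuple = (0, 0)   # patch position (x, y)
    size: tuple = (100, 40)    # patch size (w, h)
    start_time: float = 0.0    # seconds into the video
    duration: float = 5.0      # seconds the patch is shown

# Preconfigured rules: video feature -> candidate patch strategies
RULES = {
    "sports": [PatchStrategy(1, "sports_banner"),
               PatchStrategy(3, "sports_logo", position=(10, 10))],
    "news":   [PatchStrategy(2, "news_ticker", position=(0, 400))],
}

_patch_cache = {}  # feature -> previously generated patch

def generate_patch(feature, cache=_patch_cache):
    """Return (patch, relationship_info) for a video feature, or None."""
    candidates = RULES.get(feature, [])
    if not candidates:
        return None
    # Optimal strategy = highest-priority candidate (claims 1 and 3).
    optimal = max(candidates, key=lambda s: s.priority)
    if feature in cache:
        patch = cache[feature]          # reuse an existing patch
    else:
        patch = f"patch:{optimal.generation_rule}"  # generate from material
        cache[feature] = patch
    # Relationship info a system per claim 6 would save with the video.
    relation = {"position": optimal.position, "size": optimal.size,
                "start": optimal.start_time, "duration": optimal.duration}
    return patch, relation
```

In a real pipeline the returned relationship info would drive the compositing step of claim 5 (e.g., an overlay filter applied at the given position between `start` and `start + duration`); here it is just returned as a dictionary.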
CN202011105779.5A 2020-10-15 2020-10-15 Video patch generation method and device, electronic equipment and storage medium Active CN112214643B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011105779.5A CN112214643B (en) 2020-10-15 2020-10-15 Video patch generation method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112214643A CN112214643A (en) 2021-01-12
CN112214643B true CN112214643B (en) 2024-01-12

Family

ID=74054810

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011105779.5A Active CN112214643B (en) 2020-10-15 2020-10-15 Video patch generation method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112214643B (en)

Citations (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101022353A (en) * 2006-10-10 2007-08-22 鲍东山 Directional stream media advertisement insert-cut system
CN101087388A (en) * 2007-06-19 2007-12-12 腾讯科技(深圳)有限公司 A playing method and system of chip data
CN101330517A (en) * 2008-08-04 2008-12-24 上海维播信息技术有限公司 Method for distributing interactive paster advertisement
CN102436483A (en) * 2011-10-31 2012-05-02 北京交通大学 Video advertisement detecting method based on explicit type sharing subspace
CN102857795A (en) * 2012-08-29 2013-01-02 四三九九网络股份有限公司 Method for adding dynamic cinema advertisement to video player
WO2014000515A1 (en) * 2012-06-26 2014-01-03 天脉聚源(北京)传媒科技有限公司 Advertisement video detection method
CN104363484A (en) * 2014-12-01 2015-02-18 北京奇艺世纪科技有限公司 Advertisement pushing method and device based on video picture
CN104661048A (en) * 2013-11-21 2015-05-27 乐视网信息技术(北京)股份有限公司 Method and device for controlling network multimedia resources
CN104766229A (en) * 2015-04-22 2015-07-08 合一信息技术(北京)有限公司 Implantable advertisement putting method
CN104811744A (en) * 2015-04-27 2015-07-29 北京视博云科技有限公司 Information putting method and system
CN104883610A (en) * 2015-04-28 2015-09-02 腾讯科技(北京)有限公司 Patch video playing method and device
CN105141987A (en) * 2015-08-14 2015-12-09 京东方科技集团股份有限公司 Advertisement implanting method and advertisement implanting system
CN105338404A (en) * 2015-10-29 2016-02-17 北京击壤科技有限公司 Television program embedded soft advertising identification system and method thereof
CN105933776A (en) * 2016-06-12 2016-09-07 腾讯科技(北京)有限公司 Method and device for playing attached media file
CN108108996A (en) * 2017-11-29 2018-06-01 北京百度网讯科技有限公司 Advertisement placement method, device, computer equipment and readable medium in video
CN109831684A (en) * 2019-03-11 2019-05-31 深圳前海微众银行股份有限公司 Video optimized recommended method, device and readable storage medium storing program for executing
CN109996107A (en) * 2017-12-29 2019-07-09 百度在线网络技术(北京)有限公司 Video generation method, device and system
CN110163640A (en) * 2018-02-12 2019-08-23 华为技术有限公司 A kind of method and computer equipment of product placement in video
CN111327968A (en) * 2020-02-27 2020-06-23 北京百度网讯科技有限公司 Short video generation method, short video generation platform, electronic equipment and storage medium
CN111461772A (en) * 2020-03-27 2020-07-28 上海大学 Video advertisement integration system and method based on generation countermeasure network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
VideoSense: A contextual in-video advertising system; Mei Tao et al.; IEEE Transactions on Circuits and Systems for Video Technology; Vol. 19, No. 12; pp. 1866-1879 *
Research on an online video advertising placement mechanism based on fine-grained tags; Lu Feng et al.; Journal of Computer Research and Development; Vol. 51, No. 12; pp. 2733-2745 *

Also Published As

Publication number Publication date
CN112214643A (en) 2021-01-12

Similar Documents

Publication Publication Date Title
US20210321157A1 (en) Special effect processing method and apparatus for live broadcasting, and server
US11417341B2 (en) Method and system for processing comment information
US20180213289A1 (en) Method of authorizing video scene and metadata
KR102059428B1 (en) How to display content viewing devices and their content viewing options
WO2019105467A1 (en) Method and device for sharing information, storage medium, and electronic device
CN112233210A (en) Method, device, equipment and computer storage medium for generating virtual character video
WO2017080200A1 (en) Custom menu implementation method and apparatus, client and server
US20150143210A1 (en) Content Stitching Templates
CN110557699B (en) Intelligent sound box interaction method, device, equipment and storage medium
WO2020220773A1 (en) Method and apparatus for displaying picture preview information, electronic device and computer-readable storage medium
JP7240505B2 (en) Voice packet recommendation method, device, electronic device and program
CN110888635A (en) Same-layer rendering method and device, electronic equipment and storage medium
CN112825013A (en) Control method and device of terminal equipment
CN112015927B (en) Method and device for editing multimedia file, electronic equipment and storage medium
CN112015468B (en) Interface document processing method and device, electronic equipment and storage medium
CN113365010B (en) Volume adjusting method, device, equipment and storage medium
CN110636338A (en) Video definition switching method and device, electronic equipment and storage medium
CN108876866B (en) Media data processing method, device and storage medium
CN112214643B (en) Video patch generation method and device, electronic equipment and storage medium
CA3118140C (en) Video playback in an online streaming environment
WO2023207981A1 (en) Configuration file generation method, apparatus, electronic device, medium and program product
WO2023179539A1 (en) Video editing method and apparatus, and electronic device
US20210392394A1 (en) Method and apparatus for processing video, electronic device and storage medium
US20230209003A1 (en) Virtual production sets for video content creation
AU2020288833B2 (en) Techniques for text rendering using font patching

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant