CN111031366B - Method and system for implanting advertisement in video - Google Patents


Info

Publication number
CN111031366B
Authority
CN
China
Prior art keywords
video
image
fusion
target
adversarial network
Prior art date
Legal status
Active
Application number
CN201911206343.2A
Other languages
Chinese (zh)
Other versions
CN111031366A (en)
Inventor
石磊
袁春
李焰卓
张伟文
舒东树
丘龙杰
Current Assignee
Shenzhen Zhangzhong Information Technology Co ltd
Original Assignee
Shenzhen Zhangzhong Information Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Zhangzhong Information Technology Co ltd filed Critical Shenzhen Zhangzhong Information Technology Co ltd
Priority claimed from application CN201911206343.2A
Publication of application CN111031366A
Application granted
Publication of granted patent CN111031366B
Legal status: Active (anticipated expiration not listed)

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/266Channel or content management, e.g. generation and management of keys and entitlement messages in a conditional access system, merging a VOD unicast channel into a multicast channel
    • H04N21/2668Creating a channel for a dedicated end-user group, e.g. insertion of targeted commercials based on end-user profiles
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/02Marketing; Price estimation or determination; Fundraising
    • G06Q30/0241Advertisements
    • G06Q30/0251Targeted advertisements
    • G06Q30/0252Targeted advertisements based on events or environment, e.g. weather or festivals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Finance (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Accounting & Taxation (AREA)
  • Development Economics (AREA)
  • Strategic Management (AREA)
  • Economics (AREA)
  • Game Theory and Decision Science (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Environmental & Geological Engineering (AREA)
  • Marketing (AREA)
  • Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The invention relates to the field of advertising, and in particular to a method and a system for implanting advertisements in video. By implementing the invention, advertisements can be implanted into a finished video, and advertisements that have lost their timeliness can be replaced, so that advertising resources are well utilized, the viewer's context is better matched, and the viewer's acceptance of the advertising content is improved.

Description

Method and system for implanting advertisement in video
Technical Field
The invention relates to the field of advertising, and in particular to a method and a system for implanting advertisements in video.
Background
The cultural entertainment industry is entering a "we-media era" centered on self-published channels. Video has become a new trend on video websites, and videos, especially short videos, occupy an increasingly high proportion of cultural entertainment content.
These videos hold abundant advertising resources. Two main advertising formats are currently used in video: (1) inserting an advertisement before the video plays; (2) embedding the advertisement in a self-made video at production time. Revenue from the first format generally goes directly to the video operator, while revenue from the second generally goes to the video producer. The first format allows the advertised object and content to be changed at different times and for different user contexts, but viewers generally find it rather off-putting; the second format is better accepted by viewers, but the advertised object and content cannot be replaced once production is finished.
Notably, however, advertisements are time-sensitive and context-sensitive. A large number of videos circulate for a long time and remain popular, while the advertisements implanted early on lose their timeliness and contextual fit. In addition, for many videos the advertising value cannot be predicted in advance, so no advertisement is implanted at all; as these videos accumulate large numbers of forwards and views, the later advertising value goes unexploited by video operators, which is a waste of resources.
Disclosure of Invention
The technical problem to be solved by the invention is how to enhance consumers' sense of immersion so as to improve the transaction rate of the advertised goods.
To this end, according to a first aspect, an embodiment of the present invention discloses a method for implanting advertisements in videos, including the steps of:
acquiring a target image and videos to be implanted, and constructing an image set and a video set respectively; according to the image set, searching the video set, using same-class search or image semantic search, for a target video into which the target image can be implanted and for the implantation position of the target image in that target video, and fusing the target image into the implantation position through a generative adversarial network to generate a fused video; and performing fusion evaluation on single frames and frame sequences containing the fused target image in the fused video, so as to output a qualified video or raise a warning.
Further, the same-class search comprises: training a convolutional neural network capable of recognizing images of the same category; searching the target video for objects of the same category as the advertisement target using the convolutional neural network, and recording the frame numbers and positions of those objects in the target video to obtain the implantation position.
Further, when the implantation position cannot be found through the same-class search, the image semantic search is adopted.
Further, the image semantic search specifically includes: segmenting the target video images in the target video by image semantic segmentation and labeling the semantics of each video image; and acquiring the image semantics of the target image, and searching, through a pre-established semantic relation table, for video image semantics that have a semantic relation with the target image semantics, so as to obtain the implantation position.
Further, fusing the target image into the implantation position through a generative adversarial network to generate a fused video specifically includes: constraining the generative adversarial network with the target image as a condition to obtain a conditional generative adversarial network; and, according to the frame numbers and positions in the target video, fusing the target image into the implantation position using the conditional generative adversarial network to generate the fused video.
Further, the loss function of the conditional generative adversarial network is:
L = λ1{E_y[log D(y)] + E_{x,z}[log(1 − D(x, z))]} + λ2‖x − y(x)‖_1
where x is the target image, z is random noise, y is the image generated by the conditional generative adversarial network, y(x) is the image of the implantation position extracted from y, D is the fusion evaluation function, and λ1 and λ2 are proportional parameters.
Further, performing fusion evaluation on the single frames and frame sequences containing the fused target image in the fused video to output a qualified video or raise a warning specifically includes: judging the degree of synthesis, the degree of semantic association, and the consistency of the single frames and frame sequences, to generate respective fusion scores; directly outputting a qualified video or raising a warning according to the individual fusion scores; or, according to the fusion scores, performing a second fusion of the fused video and then performing fusion evaluation again, to output a qualified video or raise a warning.
Further, the fusion scores comprise individual fusion scores and a comprehensive fusion score: the three individual fusion scores are linearly weighted to generate the comprehensive fusion score; and a qualified video is output directly, or a warning raised, according to the comprehensive fusion score.
Further, the target image comprises images of the advertisement target from a plurality of different viewing angles.
According to a second aspect, an embodiment of the present invention provides a system for implanting advertisements in videos, including:
a collection construction module: acquiring a target image and a video to be implanted, and respectively constructing an image set and a video set;
an image fusion module: according to the image set, searching the video set, using same-class search or image semantic search, for a target video into which the target image can be implanted and for the implantation position of the target image in the target video, and fusing the target image into the implantation position through a generative adversarial network to generate a fused video;
a fusion evaluation module: performing fusion evaluation on single frames and frame sequences containing the fused target image in the fused video, so as to output a qualified video or raise a warning.
The invention has the beneficial effects that:
the embodiment of the invention discloses a method and a system for implanting advertisements in videos. By implementing the invention, the advertisement can be implanted into the finished video or replaced without timeliness, so that the advertisement resource is well utilized, the situation of the viewer is better met, and the acceptance of the viewer to the advertisement content is improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a flow chart of a method for placing advertisements in a video according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a system for implanting an advertisement in a video according to an embodiment of the present invention.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the accompanying drawings, and it should be understood that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
This embodiment automatically searches the images of a video whose production is complete, finds objects of the same category as the advertisement target or positions where the advertisement target can be implanted according to image semantics, fuses the advertisement target image into the target video frame by frame through a conditional generative adversarial network to obtain a fused video, and finally performs fusion evaluation on single frames and frame sequences of the fused video to output a qualified video or raise a warning.
Referring to fig. 1, a flowchart of a method for implanting an advertisement in a video according to the present embodiment is disclosed, which includes the steps of:
and S100, acquiring a target image and a video to be implanted, and respectively constructing an image set and a video set.
Specifically, for an advertisement target to be implanted into a video, multiple target images of the advertisement target from different viewing angles must first be prepared, and these images must clearly show the characteristic information of the advertisement target, including its trademark, brand, and the appearance by which an ordinary consumer recognizes it. In this embodiment, a specific example takes mineral water as the advertisement target: five frontal photographs at different elevation angles are taken as the target images and form the image set for the mineral water. The various videos to be implanted, for which the right to implant advertisements has been obtained, are collected to form the corresponding video set.
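The set construction of step S100 can be sketched with simple data structures. This is an illustrative assumption, not the patent's implementation; `AdTarget`, `PlacementSets`, and the file names are hypothetical:

```python
from dataclasses import dataclass, field

# Hypothetical data structures for step S100: an advertisement target is
# represented by several images of the same product from different viewing
# angles, and the video set holds the videos cleared for ad placement.
@dataclass
class AdTarget:
    name: str
    view_images: list  # e.g. paths of five frontal photos at different elevations

@dataclass
class PlacementSets:
    image_set: list = field(default_factory=list)   # AdTarget entries
    video_set: list = field(default_factory=list)   # paths of videos to be implanted

    def add_target(self, target: AdTarget) -> None:
        # require multiple viewing angles so the fusion step can pick a good match
        if len(target.view_images) < 2:
            raise ValueError("need images from multiple viewing angles")
        self.image_set.append(target)

sets = PlacementSets()
sets.add_target(AdTarget("mineral_water", [f"front_{i}.jpg" for i in range(5)]))
sets.video_set.append("travel_vlog.mp4")
```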
S200: according to the image set, searching the video set, using same-class search or image semantic search, for a target video into which the target image can be implanted and for the implantation position of the target image in the target video, and fusing the target image into the implantation position through a generative adversarial network to generate a fused video.
Specifically, the video set formed in step S100 is searched for target videos using same-class search or image semantic search against the image set; the target videos found by the two methods together form a candidate set, yielding multiple implantable videos to choose from. In this embodiment, the videos to be implanted are searched for target videos into which a mineral water advertisement can be embedded. After a target video is acquired, the position in the target video at which the target image should be fused must be confirmed, that is, the implantation position is acquired. In this embodiment the implantation position can be found by two methods.
The first method is same-class search: the acquired target video is searched for objects of the same category as the advertisement target, and if such objects are found, their frame numbers and positions in the target video are recorded. A position is expressed in rectangular coordinates as (x, y, w, h), where (x, y) is the two-dimensional coordinate of the object's upper-left corner in the video image and (w, h) is the width and height of the advertisement target region in the image. In this embodiment, frames containing mineral water, and its position within each frame, must be found. Since mineral water comes in many brands, a convolutional neural network capable of recognizing images of the same category must be trained in advance; this network, in effect a mineral-water classifier, is used to search the target video, and the frame numbers and positions at which mineral water appears give the implantation position. Convolutional neural networks can be trained to recognize different articles, and current trained networks can recognize even thousands of article classes, so training a recognizer for a single commodity is comparatively easy.
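The frame-by-frame same-class search above can be sketched as follows. The recognizer is stubbed; `same_class_search`, `toy_recognizer`, and the dict-based frames are illustrative assumptions standing in for a trained CNN:

```python
# Sketch of the same-class search in step S200: a same-class recognizer is run
# frame by frame; every hit is recorded as (frame_number, (x, y, w, h)), where
# (x, y) is the top-left corner and (w, h) the width/height of the match.
def same_class_search(frames, recognize):
    """recognize(frame) -> (x, y, w, h) box of a same-class object, or None."""
    placements = []
    for frame_no, frame in enumerate(frames):
        box = recognize(frame)
        if box is not None:
            placements.append((frame_no, box))
    return placements

# toy stand-in for a CNN trained to recognize any brand of bottled water;
# frames are plain dicts purely for illustration
def toy_recognizer(frame):
    return frame.get("bottle")

frames = [{"bottle": None}, {"bottle": (40, 80, 30, 90)}, {}]
hits = same_class_search(frames, toy_recognizer)
```

Only the second frame yields an implantation position here; the empty results on the other frames model frames where no same-class object appears.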
The second method is image semantic search: the acquired target video is searched for semantic positions that can accommodate the advertisement target. Before the search, a semantic relation table must be established in advance; it stores the positional relations between the advertisement target and other objects. Preferably, the table can be built manually or learned from a large number of images by machine learning. Then image semantic segmentation is performed on the target video images, each segmented image block is semantically labeled, other articles that have a positional relation with the advertisement target are found, and the implantation position is determined by locating the frame numbers and positions of those articles in the target video. Preferably, image semantic search is used when no implantation position is found by the same-class search. In the mineral water example, semantic relations may be established such as "mineral water is on a table", "mineral water is in a hand", and "mineral water is on a shelf". The positions of tables, hands, and shelves in the image can then be output as positions into which the mineral water image can be implanted, completing the advertisement placement. Preferably, an image semantic segmentation deep neural network is trained to perform the segmentation and semantic labeling. In this embodiment, an advertisement image data set is collected and manually segmented, and the deep neural network of BiSeNet (Bilateral Segmentation Network for Real-time Semantic Segmentation) is retrained to complete the segmentation and semantic labeling.
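The semantic-relation lookup can be sketched as a table of "can hold the target" scene labels. The table contents and the per-frame label dicts are illustrative assumptions; in practice the labels would come from the segmentation network described above:

```python
# Minimal sketch of the semantic-relation lookup used by image semantic search:
# the table maps an advertisement target to scene objects that can plausibly
# hold it ("mineral water on table", "... in hand", "... on shelf").
SEMANTIC_RELATIONS = {
    "mineral water": {"table", "hand", "shelf"},
}

def semantic_search(target, frame_labels):
    """frame_labels: one {label: (x, y, w, h)} dict per frame, as produced by
    a semantic-segmentation network; returns (frame, label, box) placements."""
    related = SEMANTIC_RELATIONS.get(target, set())
    placements = []
    for frame_no, labels in enumerate(frame_labels):
        for label, box in labels.items():
            if label in related:
                placements.append((frame_no, label, box))
    return placements

labels = [{"sky": (0, 0, 640, 100)}, {"table": (100, 200, 300, 80)}]
spots = semantic_search("mineral water", labels)
```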
Further, after the implantation position is found, the target image is fused into it through a generative adversarial network to generate the fused video. Specifically, the generative adversarial network is constrained with the target image as a condition, yielding a conditional generative adversarial network (Conditional Generative Adversarial Network), which ensures that the category, brand, and main appearance features of the advertisement target contained in the generated image do not change. Fusion is attempted separately for each of the multiple viewing-angle images in the image set, and the M resulting images are output respectively, giving multiple fused videos.
The loss function of the conditional generative adversarial network is:
L = λ1{E_y[log D(y)] + E_{x,z}[log(1 − D(x, z))]} + λ2‖x − y(x)‖_1
where x is the target image, z is random noise, y is the image generated by the conditional generative adversarial network, y(x) is the image of the implantation position extracted from y, D is the fusion evaluation function, and λ1 and λ2 are proportional parameters. The loss function is the objective function of the machine learning problem (customarily called a loss function in neural networks), and the goal of training is to minimize it.
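The loss function above can be sketched numerically. This is a toy single-sample sketch under stated assumptions: `cgan_loss`, the constant discriminator, and the tiny arrays are illustrative, and each expectation is replaced by one sample:

```python
import numpy as np

# Numerical sketch of L = lam1*{E_y[log D(y)] + E_{x,z}[log(1 - D(x,z))]}
#                       + lam2*||x - y(x)||_1
def cgan_loss(x, z, y, y_of_x, D, lam1=1.0, lam2=1.0):
    # adversarial part, with a single sample standing in for each expectation
    adversarial = np.log(D(y)) + np.log(1.0 - D((x, z)))
    # reconstruction part: the L1 term keeps the target's appearance intact
    l1 = np.abs(x - y_of_x).sum()
    return lam1 * adversarial + lam2 * l1

x = np.ones((2, 2))   # target image (toy 2x2 array)
z = np.zeros(4)       # random noise (fixed here for reproducibility)
y = np.ones((2, 2))   # generated image
# toy discriminator that always outputs 0.5; y(x) == x, so the L1 term is zero
loss = cgan_loss(x, z, y, y_of_x=x, D=lambda inp: 0.5)
```

With a perfect reconstruction the L1 term vanishes and only the adversarial term remains, which is why training trades off realism (via D) against fidelity to the target image (via the L1 term).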
S300: performing fusion evaluation on single frames and frame sequences containing the fused target image in the fused video, to output a qualified video or raise a warning.
In this embodiment, single-frame quality evaluation and sequential-image quality evaluation are performed on the fused video after the target image has been implanted. If a fused frame is judged to be incongruous, to show pasting traces, or to contain a semantic error, that frame is marked and re-fused; if problems remain after N fusion attempts, a warning is raised for manual adjustment, where N is a set value. If no problems are found, the qualified video is output, either after a single round of fusion evaluation or after repeated evaluation and re-fusion. In this embodiment, weighing the computational load against the success rate when judging the quality of the fused mineral water images, N is taken between 5 and 10, preferably 7.
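The evaluate-and-re-fuse loop with the attempt limit N can be sketched as follows. `fuse_until_qualified` and the toy score sequence are illustrative assumptions, not the patent's implementation:

```python
# Sketch of the evaluate / re-fuse loop: each candidate frame is checked,
# flagged frames are re-fused, and after n_max failed attempts (N = 7 in the
# embodiment's preferred setting) the caller should raise a warning for
# manual adjustment.
def fuse_until_qualified(fuse, evaluate, n_max=7):
    """fuse() -> candidate frame; evaluate(frame) -> True if qualified."""
    for attempt in range(1, n_max + 1):
        frame = fuse()
        if evaluate(frame):
            return frame, attempt
    return None, n_max  # signal that manual fixing is needed

# toy example: the third fusion attempt produces a qualifying score
attempts = iter([0.9, 0.8, 0.4])          # comprehensive fusion scores
frame, tries = fuse_until_qualified(
    fuse=lambda: next(attempts),
    evaluate=lambda score: score < 0.6,   # lower score = better fusion
)
```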
The fusion quality is evaluated by at least three discriminators.
The first discriminator, D1(y), judges whether the single frames and frame sequences containing the fused target image look synthesized; its output is a score expressed as a decimal between 0 and 1, where a higher score means a higher likelihood that the fused image is perceived as synthesized.
The second discriminator, D2(y), judges whether the single frames and frame sequences containing the fused target image contain semantic errors; its output is a score between 0 and 1, where a higher score means a higher likelihood of a semantic error.
The third discriminator, D3(y), judges whether the advertisement target is inconsistent between the single frames and across the frame sequence; its output is a score between 0 and 1, where a higher score means more severe inter-frame inconsistency.
After these judgments, each discriminator gives an individual fusion score, and the three scores are linearly weighted into the comprehensive fusion score:
S(y) = α1·D1(y) + α2·D2(y) + α3·D3(y)
where y is the generated image and α1, α2, α3 are weighting proportions. In this embodiment, since inter-frame jumps are what most easily disturb the viewer, α1 is taken as 1, α2 as 1, and α3 as 2. The individual fusion scores and the comprehensive score are fed back into step S300 to guide regeneration, re-fusion, and re-evaluation of the fused video until a qualified video is output or a warning is raised. A frame is considered qualified when its comprehensive fusion score is below a preset threshold, taken as 0.6 in this embodiment. If M consecutive frames are qualified, the whole video is considered qualified; in this embodiment M is taken as 150 (lasting approximately 5 seconds).
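The weighted comprehensive score and the consecutive-frames rule can be sketched as follows. The discriminator scores are toy inputs, and M is shortened to 3 purely for illustration (the text uses M = 150):

```python
# Sketch of the comprehensive fusion score and the M-consecutive-frame rule:
# three discriminator scores (higher = worse) are linearly weighted with
# alpha = (1, 1, 2); a frame qualifies when the weighted score is below the
# 0.6 threshold, and the video qualifies when m_consecutive frames in a row
# qualify.
def composite_score(d1, d2, d3, alphas=(1.0, 1.0, 2.0)):
    return alphas[0] * d1 + alphas[1] * d2 + alphas[2] * d3

def video_qualified(frame_scores, threshold=0.6, m_consecutive=3):
    run = 0
    for s in frame_scores:
        run = run + 1 if s < threshold else 0
        if run >= m_consecutive:
            return True
    return False

scores = [composite_score(0.1, 0.1, 0.1),    # 0.4 -> qualified
          composite_score(0.1, 0.1, 0.2),    # ~0.6 -> not below threshold
          composite_score(0.05, 0.05, 0.1),  # 0.3 -> qualified
          composite_score(0.1, 0.0, 0.1),    # 0.3 -> qualified
          composite_score(0.0, 0.1, 0.1)]    # 0.3 -> third in a row
ok = video_qualified(scores)
```

Note the inter-frame discriminator carries double weight, matching the embodiment's choice of α3 = 2 because inter-frame jumps disturb viewers most.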
Further, if the comprehensive fusion score is not qualified, the loss function of the conditional generative adversarial network in S300 is refined as:
L = λ1{E_y[log D(y)] + E_{x,z}[log(1 − D(x, z))]} + λ2‖x − y(x)‖_1 + λ3·S(y)
where S(y) denotes the comprehensive fusion score of the generated image y, and λ1, λ2, λ3 are proportional parameters, taken in this embodiment as 1, 1, and 0.01 respectively.
Compared with the prior art, in which advertisements in a finished video cannot be updated or newly implanted, the scheme disclosed in this embodiment obtains implantation positions by matching finished videos through same-class or image semantic search, fuses the target images into those positions with a generative adversarial network to generate a fused video, and finally, after fusion evaluation, outputs a qualified video or raises a warning. With this scheme, after a video has been produced, advertisements can be implanted into videos that carry none, or advertisements already implanted can be replaced a second time. This secondary generation of in-video advertisements lets the placed advertisements regain timeliness and conform to the viewer's context. Moreover, with secondary generation the implantation and its revenue are controlled by the video operator, making advertising revenue easier to maximize. Because implantable positions are searched under both conditions, same-category objects and semantic matches, the likelihood of being able to implant an advertisement in a video is greatly improved; and the conditional generative adversarial network together with the quality evaluation module makes the generated video images realistic and free of incongruity.
This embodiment also discloses a system for implanting an advertisement in a video. Referring to fig. 2, a schematic structural diagram of the system, the system comprises:
the collection construction module 100: acquiring a target image and a video to be implanted, and respectively constructing an image set and a video set;
the image fusion module 200: according to the image set, searching the video set, using same-class search or image semantic search, for a target video into which the target image can be implanted and for the implantation position of the target image in the target video, and fusing the target image into the implantation position through a generative adversarial network to generate a fused video;
the fusion evaluation module 300: performing fusion evaluation on single frames and frame sequences containing the fused target image in the fused video, so as to output a qualified video or raise a warning.
This embodiment may be implemented on hardware including, but not limited to, smartphones, tablets, smart televisions, computers, and the like. The units or modules in this embodiment may be deployed on the same hardware, or distributed across multiple hardware devices that form a complete system through network communication.
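The three modules of the system can be sketched as a minimal pipeline. `AdPlacementSystem` and the lambda stand-ins are illustrative assumptions; real implementations would wrap the search, cGAN fusion, and discriminator evaluation described earlier:

```python
# Minimal sketch of the three-module system: each module is a callable stage
# and run() chains set construction -> fusion -> evaluation.
class AdPlacementSystem:
    def __init__(self, build_sets, fuse, evaluate):
        self.build_sets = build_sets   # set construction module
        self.fuse = fuse               # image fusion module
        self.evaluate = evaluate       # fusion evaluation module

    def run(self, target_images, videos):
        image_set, video_set = self.build_sets(target_images, videos)
        fused = self.fuse(image_set, video_set)
        return self.evaluate(fused)

system = AdPlacementSystem(
    build_sets=lambda imgs, vids: (list(imgs), list(vids)),
    fuse=lambda image_set, video_set: [("fused", v) for v in video_set],
    evaluate=lambda fused: {"qualified": len(fused) > 0, "videos": fused},
)
result = system.run(["bottle.jpg"], ["vlog.mp4"])
```

Keeping the stages as injected callables mirrors the text's note that the modules may be deployed on the same hardware or split across devices communicating over a network.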
The foregoing is merely an embodiment of the present invention; common general knowledge such as well-known specific structures and features is not described here in further detail. It should be noted that those skilled in the art can make variations and modifications without departing from the structure of the present invention; these shall also fall within the protection scope of the present invention and shall not affect the effectiveness of the implementation of the invention or the applicability of the patent. The scope of protection of this application shall be determined by the content of the claims, and the detailed description in the specification may be used to interpret the content of the claims.

Claims (8)

1. A method for placing advertisements in a video, comprising the steps of:
acquiring a target image and a video to be implanted, and respectively constructing an image set and a video set;
according to the image set, searching the video set, using same-class search or image semantic search, for a target video into which the target image can be implanted and for the implantation position of the target image in the target video, and fusing the target image into the implantation position through a generative adversarial network to generate a fused video, wherein fusing the target image into the implantation position through the generative adversarial network to generate the fused video comprises:
constraining the generative adversarial network with the target image as a condition to obtain a conditional generative adversarial network;
according to the frame numbers and positions in the target video, fusing the target image into the implantation position using the conditional generative adversarial network to generate the fused video, wherein the loss function of the conditional generative adversarial network is:
L = λ1{E_y[log D(y)] + E_{x,z}[log(1 − D(x, z))]} + λ2‖x − y(x)‖_1
where x is the target image, z is random noise, y is the image generated by the conditional generative adversarial network, y(x) is the image of the implantation position extracted from y, D is the fusion evaluation function, and λ1 and λ2 are proportional parameters;
and performing fusion evaluation on single frames and frame sequences containing the fused target image in the fused video to output a qualified video or raise a warning.
2. The method of claim 1, wherein performing the same-class search on the target video comprises:
training a convolutional neural network capable of recognizing images of the same category;
searching the target video for objects of the same category as the advertisement target using the convolutional neural network, and recording the frame numbers and positions of those objects in the target video to obtain the implantation position.
3. The method of claim 2, wherein the image semantic search is used when the implantation position cannot be found by the same-class search.
4. The method for implanting advertisements in a video according to any one of claims 1 to 3, wherein the image semantic search specifically comprises:
segmenting target video images in the target video by image semantic segmentation and labeling the semantics of each video image;
and acquiring the image semantics of the target image, and searching, through a pre-established semantic relation table, for video image semantics that have a semantic relation with the target image semantics, to obtain the implantation position.
5. The method for implanting advertisements in a video according to claim 1, wherein performing fusion evaluation on the single frames and frame sequences containing the fused target image in the fused video to output a qualified video or raise a warning comprises:
judging the degree of synthesis, the degree of semantic association, and the consistency of the single frames and frame sequences, to generate respective fusion scores;
directly outputting a qualified video or raising a warning according to the fusion scores;
or, according to the fusion scores, performing a second fusion of the fused video and then performing fusion evaluation again, to output a qualified video or raise a warning.
6. The method of advertising in a video according to claim 5, wherein the fusion score comprises a single item fusion score and a composite fusion score:
carrying out linear weighting on the three unidirectional fusion scores to generate a comprehensive fusion score;
and directly outputting qualified videos or proposing warnings according to the comprehensive fusion score.
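The linear weighting of claim 6 can be sketched in a few lines. The weights and acceptance threshold below are illustrative assumptions; the patent does not specify their values:

```python
def composite_fusion_score(synthesis, semantic_assoc, consistency,
                           weights=(0.4, 0.3, 0.3), threshold=0.7):
    """Linearly weight the three single-item fusion scores (degree of
    synthesis, semantic association, consistency) into a composite score,
    then decide between outputting the video and issuing a warning."""
    total = sum(w * s
                for w, s in zip(weights, (synthesis, semantic_assoc, consistency)))
    verdict = "output qualified video" if total >= threshold else "warn"
    return total, verdict
```

A low composite score would, per claim 5, either trigger a warning or send the video back for a second fusion pass.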
7. The method of claim 1, wherein the target image comprises a plurality of images of the advertising target captured from different viewing angles.
8. A system for implanting an advertisement in a video, comprising:
a collection construction module, configured to acquire target images and videos to be implanted, and to construct an image set and a video set respectively;
an image fusion module, configured to search the video set, according to the image set, for a target video into which a target image can be implanted and for the implantation position of the target image in that video, using same-class search or image semantic search, and to fuse the target image to the implantation position through a generative adversarial network to generate a fused video, wherein fusing the target image to the implantation position through the generative adversarial network to generate the fused video comprises:
constraining the generative adversarial network with the target image as a condition, to obtain a conditional generative adversarial network;
fusing the target image to the implantation position using the conditional generative adversarial network, according to the frame number and position in the target video, to generate the fused video, wherein the loss function of the conditional generative adversarial network is:
L = λ1{E_y[log D(y)] + E_{x,z}[log(1 − D(x, z))]} + λ2‖x − y(x)‖_1
wherein x is the target image, z is random noise, y is the image generated by the conditional generative adversarial network, y(x) is the image of the implantation position extracted from y, D is the fusion evaluation function, and λ1 and λ2 are proportional parameters;
a fusion evaluation module, configured to perform fusion evaluation on the single-frame images and image series containing the fused target image in the fused video, to output a qualified video or issue a warning.
CN201911206343.2A 2019-11-29 2019-11-29 Method and system for implanting advertisement in video Active CN111031366B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911206343.2A CN111031366B (en) 2019-11-29 2019-11-29 Method and system for implanting advertisement in video


Publications (2)

Publication Number Publication Date
CN111031366A CN111031366A (en) 2020-04-17
CN111031366B true CN111031366B (en) 2021-12-17

Family

ID=70207762

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911206343.2A Active CN111031366B (en) 2019-11-29 2019-11-29 Method and system for implanting advertisement in video

Country Status (1)

Country Link
CN (1) CN111031366B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014094497A1 (en) * 2012-12-17 2014-06-26 深圳先进技术研究院 Method and system for adaptive advertising in video content
CN104967885A (en) * 2015-03-27 2015-10-07 哈尔滨工业大学深圳研究生院 Advertisement recommending method and system based on video content
CN106303621A (en) * 2015-06-01 2017-01-04 北京中投视讯文化传媒股份有限公司 Video advertisement insertion method and device
CN107977629A (en) * 2017-12-04 2018-05-01 电子科技大学 Facial image aging synthesis method based on feature-separation adversarial network
CN108985229A (en) * 2018-07-17 2018-12-11 北京果盟科技有限公司 Intelligent advertisement replacement method and system based on deep neural network
CN110163640A (en) * 2018-02-12 2019-08-23 华为技术有限公司 Method and computer device for product placement in video
CN110210386A (en) * 2019-05-31 2019-09-06 北京市商汤科技开发有限公司 Video generation method for action transfer, and neural network training method and device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101160363A (en) * 2005-02-23 2008-04-09 西玛耐诺技术以色列有限公司 Ink jet printable compositions for preparing electronic devices and patterns
US20170223423A1 (en) * 2014-08-11 2017-08-03 Browseplay, Inc. System and method for secure cross-platform video transmission



Similar Documents

Publication Publication Date Title
CN110163640B (en) Method for implanting advertisement in video and computer equipment
US11409791B2 (en) Joint heterogeneous language-vision embeddings for video tagging and search
CN109145784B (en) Method and apparatus for processing video
CN107197384B Multi-modal interaction method and system for virtual robots applied to live video streaming platforms
US8750602B2 (en) Method and system for personalized advertisement push based on user interest learning
JP4370387B2 (en) Apparatus and method for generating label object image of video sequence
CN107861972A Method and apparatus for displaying full commodity results after a user enters merchandise information
EP2587826A1 (en) Extraction and association method and system for objects of interest in video
CN103929653B Augmented reality video generator and player, and generation and playing methods thereof
CN106537390B Identifying the presentation style of educational videos
KR20190011829A (en) Estimating and displaying social interest in time-based media
CN111143617A (en) Automatic generation method and system for picture or video text description
CN109408672A Article generation method, apparatus, server, and storage medium
CN111985419B (en) Video processing method and related equipment
CN110879974A (en) Video classification method and device
CN111741327B (en) Media processing method and media server
US20170013309A1 (en) System and method for product placement
US9866894B2 (en) Method for annotating an object in a multimedia asset
EP3396964B1 (en) Dynamic content placement in a still image or a video
CN111031366B (en) Method and system for implanting advertisement in video
CN116980665A (en) Video processing method, device, computer equipment, medium and product
CN113449808B (en) Multi-source image-text information classification method and corresponding device, equipment and medium
CN115222858A Method and device for training an animation reconstruction network, and for image and video reconstruction therewith
CN114742991A (en) Poster background image selection, model training, poster generation method and related device
US10674184B2 (en) Dynamic content rendering in media

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP02 Change in the address of a patent holder

Address after: 518000 4601, building 1, phase II, Qianhai Shimao financial center, No. 3040, Xinghai Avenue, Liwan community, Nanshan street, Nanshan District, Shenzhen, Guangdong

Patentee after: SHENZHEN ZHANGZHONG INFORMATION TECHNOLOGY Co.,Ltd.

Address before: 518000 south a, 14th floor, Sangda science and technology building, No. 1, Keji Road, Nanshan District, Shenzhen, Guangdong

Patentee before: SHENZHEN ZHANGZHONG INFORMATION TECHNOLOGY Co.,Ltd.