CN112752151B - Method and device for detecting dynamic advertisement implantation position - Google Patents

Method and device for detecting dynamic advertisement implantation position

Info

Publication number
CN112752151B
CN112752151B (application CN202011601342.0A)
Authority
CN
China
Prior art keywords
target
split
video
mirror
target object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011601342.0A
Other languages
Chinese (zh)
Other versions
CN112752151A (en)
Inventor
杨杰
吴振港
胡玮
罗思伟
宋施恩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan Happly Sunshine Interactive Entertainment Media Co Ltd
Original Assignee
Hunan Happly Sunshine Interactive Entertainment Media Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan Happly Sunshine Interactive Entertainment Media Co Ltd filed Critical Hunan Happly Sunshine Interactive Entertainment Media Co Ltd
Priority to CN202011601342.0A priority Critical patent/CN112752151B/en
Publication of CN112752151A publication Critical patent/CN112752151A/en
Application granted granted Critical
Publication of CN112752151B publication Critical patent/CN112752151B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 - Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 - Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N 21/44008 - Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H04N 21/44016 - Processing of video elementary streams involving splicing one content stream with another content stream, e.g. for substituting a video clip
    • H04N 21/80 - Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/81 - Monomedia components thereof
    • H04N 21/812 - Monomedia components thereof involving advertisement data
    • H04N 21/83 - Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
    • H04N 21/8456 - Structuring of content by decomposing the content in the time domain, e.g. in time segments

Abstract

The invention discloses a method and a device for detecting a dynamic advertisement implantation position. An obtained target video requiring advertisement implantation is segmented into scene shots by a lens segmentation algorithm to obtain a plurality of split-mirror video segments; the target split-mirror video segments containing a target object are screened out from the plurality of split-mirror video segments; from these, the target split-mirror video segments whose relevant parameters of the target object meet preset parameter thresholds are selected as dynamic advertisement implantation video segments; and a cascaded pyramid network model performs point location regression on the first frame containing the target object in each dynamic advertisement implantation video segment to obtain an accurate dynamic advertisement implantation position. The invention uses deep learning and image processing technology to automatically screen and accurately locate dynamic advertisement implantation positions, saves the time and cost of manually watching videos to detect and mark these positions, and improves the detection precision and detection efficiency of the dynamic advertisement implantation position.

Description

Method and device for detecting dynamic advertisement implantation position
Technical Field
The invention relates to the technical field of advertisement implantation, in particular to a method and a device for detecting a dynamic advertisement implantation position.
Background
Advertisement implantation refers to blending advertising material into the content of a film or television drama. Compared with inserting advertisements at the beginning, the end or the middle of a programme, an implanted advertisement is deeply fused with the scene, is more easily accepted by the audience, and achieves a subtle, unobtrusive promotional effect.
Traditional advertisement implantation is static: the implanted advertisement content must be decided before shooting, and it is difficult to change once shooting is finished. In addition, deploying static advertisement implantation on shooting props consumes a large amount of resources; the cost is particularly high when large advertisements such as outdoor high-rise billboards appear in the shot.
Dynamic advertisement implantation has emerged to overcome the various shortcomings of static advertisement implantation. Dynamic advertisement implantation refers to secondary synthesis of video content that has already been shot, inserting advertisement material at suitable positions in the video so as to achieve the purpose of advertisement implantation. Compared with static implantation, dynamic implantation is free of the constraints of shooting: the implanted advertisement is no longer limited by the shooting schedule, the cost of implantation is reduced, potential advertisement resources in the video are fully exploited, the advertisement selling window and flexibility are increased, potential advertisement inventory is unlocked, and the implanted advertisement can be changed flexibly according to actual conditions.
For dynamic advertisement implantation, detecting a suitable implantation position is the most critical step. Currently the mainstream practice in the industry treats dynamic advertisement implantation as a video post-production problem: suitable implantation points are detected and marked manually, and masking is then performed with video editing software such as AE (Adobe After Effects). This existing approach has several problems: first, every video must be browsed manually in full before the suitable implantation points can be recorded, which consumes a great deal of manpower and time; second, detection efficiency is low and the criteria for judging implantation points are not clear enough, which increases the difficulty of subsequent advertisement implantation.
Disclosure of Invention
In view of the above, the present invention discloses a method and an apparatus for detecting a dynamic advertisement implantation position, so as to automatically screen and accurately locate dynamic advertisement implantation positions in a video. This saves the time and cost of manually watching the video to detect and mark the dynamic advertisement implantation positions, improves the detection precision and detection efficiency, and thereby reduces the advertisement implantation cost and increases advertisement revenue to a certain extent.
A method for detecting a dynamic advertisement placement location, comprising:
carrying out scene shot segmentation on the obtained target video needing advertisement implantation by adopting a lens segmentation algorithm to obtain a plurality of lens segmentation video clips;
screening out a target split-mirror video clip containing a target object from the plurality of split-mirror video clips, wherein the target object is an object suitable for dynamic advertisement implantation;
screening out, from the plurality of target split-mirror video clips, the target split-mirror video clips in which the relevant parameters of the target object meet preset parameter thresholds, as dynamic advertisement implantation video clips, wherein the relevant parameters of the target object comprise: the key frame number, the screen occupation ratio, the definition and the shielded proportion of the target object;
and performing, for the dynamic advertisement implantation video clip, point location regression on the first frame containing the target object by adopting a pre-trained cascaded pyramid network model, to obtain an accurate dynamic advertisement implantation position.
Optionally, the method for segmenting the scene shot of the obtained target video needing advertisement implantation by using a split-lens algorithm to obtain a plurality of split-lens video segments specifically includes:
performing scene shot segmentation on the target video by adopting a content change-based lens segmentation algorithm to obtain a plurality of original lens segmentation video clips;
determining the video duration of each original mirror video clip according to the first frame and the last frame of each original mirror video clip;
and screening out the original split-mirror video clips with the video duration not less than the video duration threshold value from the plurality of original split-mirror video clips as the split-mirror video clips.
Optionally, the screening out a target split-mirror video clip containing a target object from the plurality of split-mirror video clips specifically includes:
extracting corresponding intermediate frames from each of the split-mirror video clips in sequence;
carrying out target object detection on each extracted intermediate frame by using a pre-trained target detection model;
and determining the split-mirror video segment corresponding to the intermediate frame of the detected target object as the target split-mirror video segment.
Optionally, the screening out, from the plurality of target split-mirror video segments, of the target split-mirror video segments whose relevant parameters of the target object meet preset parameter thresholds, as dynamic advertisement implantation video segments, specifically includes:
counting the number of key frames of the continuous occurrence of the target object in each target split-mirror video clip;
screening target split-mirror video clips with the key frame number not less than a key frame number threshold value from all the target split-mirror video clips, and recording as first target split-mirror video clips;
counting the screen occupation ratio of the target object in each first target split-mirror video clip;
screening out first target split-mirror video clips with the screen occupation ratio not less than a screen occupation ratio threshold value from all the first target split-mirror video clips, and recording as second target split-mirror video clips;
calculating the definition of the target object in each second target split-mirror video clip;
screening out second target split-mirror video clips with the definition not less than a definition threshold value from all the second target split-mirror video clips, and recording as third target split-mirror video clips;
calculating the shielded proportion of the target object in each third target split-mirror video clip;
and screening out the third target split-mirror video segments with the sheltered proportion not greater than the sheltered proportion threshold value from all the third target split-mirror video segments, and taking the third target split-mirror video segments as dynamic advertisement implantation video segments.
Optionally, the method further includes:
and storing the relevant information of the dynamic advertisement implantation video clip and the corresponding dynamic advertisement implantation position to a database.
A dynamic advertisement placement location detection apparatus, comprising:
the video segmentation unit is used for carrying out scene segmentation on the acquired target video needing advertisement implantation by adopting a mirror segmentation algorithm to obtain a plurality of mirror segmentation video clips;
the first screening unit is used for screening out a target split-mirror video clip containing a target object from the plurality of split-mirror video clips, wherein the target object is an object suitable for dynamic advertisement implantation;
a second screening unit, configured to screen out, from the plurality of target split-mirror video segments, the target split-mirror video segments whose relevant parameters of the target object meet preset parameter thresholds, as dynamic advertisement implantation video segments, wherein the relevant parameters of the target object include: the key frame number, the screen occupation ratio, the definition and the shielded proportion of the target object;
and the advertisement implantation position determining unit is used for performing, for each dynamic advertisement implantation video clip, point location regression on the first frame containing the target object by adopting a pre-trained cascaded pyramid network model, to obtain an accurate dynamic advertisement implantation position.
Optionally, the video segmentation unit specifically includes:
the video segmentation subunit is used for performing scene segmentation on the target video by adopting a content change-based mirror segmentation algorithm to obtain a plurality of original mirror-segmented video segments;
a duration determining subunit, configured to determine, according to the first frame and the last frame of each original mirrored video segment, a video duration of each original mirrored video segment;
and the first screening subunit is used for screening out the original split-mirror video clips with the video duration not less than the video duration threshold value from the plurality of original split-mirror video clips as the split-mirror video clips.
Optionally, the first screening unit specifically includes:
an intermediate frame extraction subunit, configured to extract corresponding intermediate frames from each of the split-mirror video segments in sequence;
a detection subunit, configured to perform target object detection on each extracted intermediate frame by using a pre-trained target detection model;
and the video clip determining subunit is configured to determine the split-mirror video clip corresponding to the intermediate frame of the detected target object as the target split-mirror video clip.
Optionally, the second screening unit specifically includes:
a key frame number counting subunit, configured to count the number of key frames in which the target object continuously appears in each of the target split-mirror video clips;
the second screening subunit is used for screening the target split-mirror video clips of which the number of the key frames is not less than the threshold value of the number of the key frames from all the target split-mirror video clips and recording the target split-mirror video clips as first target split-mirror video clips;
a screen occupation ratio counting subunit, configured to count a screen occupation ratio of the target object in each of the first target split-mirror video clips;
the third screening subunit is configured to screen out, from each first target split-mirror video segment, a first target split-mirror video segment whose screen occupation ratio is not less than a screen occupation ratio threshold value, and record the first target split-mirror video segment as a second target split-mirror video segment;
the definition calculating subunit is used for calculating the definition of the target object in each second target split-mirror video clip;
a fourth screening subunit, configured to screen, from the second target split-mirror video segments, a second target split-mirror video segment whose definition is not less than a definition threshold value, and record the second target split-mirror video segment as a third target split-mirror video segment;
an occluded proportion calculation subunit, configured to calculate an occluded proportion of the target object in each of the third target split-mirror video segments;
and the fifth screening subunit is used for screening out the third target sub-lens video segments with the sheltered proportion not greater than the sheltered proportion threshold value from each third target sub-lens video segment, and using the third target sub-lens video segments as dynamic advertisement implantation video segments.
Optionally, the detection apparatus further includes:
and the storage unit is used for storing the relevant information of the dynamic advertisement implantation video clip and the corresponding dynamic advertisement implantation position to a database.
According to the technical scheme, the method and the device for detecting the dynamic advertisement implantation position segment an obtained target video requiring advertisement implantation into scene shots by adopting a lens splitting algorithm to obtain a plurality of split-mirror video segments; screen out, from the plurality of split-mirror video segments, the target split-mirror video segments containing a target object suitable for dynamic advertisement implantation; screen out, from the plurality of target split-mirror video segments, those in which the relevant parameters of the target object meet preset parameter thresholds, as dynamic advertisement implantation video segments; and, for each dynamic advertisement implantation video segment, perform point location regression on the first frame containing the target object with a pre-trained cascaded pyramid network model to obtain an accurate dynamic advertisement implantation position. The invention uses deep learning and image processing technology to automatically screen and accurately locate dynamic advertisement implantation positions in a video, thereby saving the time and cost of manually watching the video to detect and mark dynamic advertisement implantation positions, improving detection precision and detection efficiency, further reducing the advertisement implantation cost and increasing advertisement revenue to a certain extent.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the disclosed drawings without creative efforts.
FIG. 1 is a flowchart of a method for detecting a dynamic advertisement placement location according to an embodiment of the present invention;
fig. 2 is a flowchart of a method for performing scene segmentation on an acquired target video to be advertised by using a split-lens algorithm to obtain a plurality of split-lens video segments, according to an embodiment of the present invention;
FIG. 3 is a flowchart of a method for selecting a target split-mirror video clip containing a target object from a plurality of split-mirror video clips according to an embodiment of the present invention;
FIG. 4 is a flowchart of a method for screening out a dynamic advertisement placement video clip from a plurality of target split-mirror video clips according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of a dynamic advertisement placement position detection apparatus according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a video slicing unit according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of a first screening unit according to an embodiment of the disclosure;
fig. 8 is a schematic structural diagram of a second screening unit according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The embodiment of the invention discloses a method and a device for detecting a dynamic advertisement implantation position. An acquired target video requiring advertisement implantation is segmented into scene shots by a lens segmentation algorithm to obtain a plurality of split-mirror video segments; the target split-mirror video segments containing a target object suitable for dynamic advertisement implantation are screened out from the plurality of split-mirror video segments; from these, the target split-mirror video segments whose relevant parameters of the target object meet preset parameter thresholds are screened out as dynamic advertisement implantation video segments; and a pre-trained cascaded pyramid network model performs point location regression on the first frame containing the target object in each dynamic advertisement implantation video segment to obtain an accurate dynamic advertisement implantation position. The invention uses deep learning and image processing technology to automatically screen and accurately locate dynamic advertisement implantation positions in a video, thereby saving the time and cost of manually watching the video to detect and mark these positions, improving detection precision and efficiency, further reducing advertisement implantation cost and increasing advertisement revenue to a certain extent.
Referring to fig. 1, a flowchart of a method for detecting a dynamic advertisement placement position disclosed in an embodiment of the present invention includes:
s101, carrying out scene segmentation on the obtained target video needing advertisement implantation by adopting a split-lens algorithm to obtain a plurality of split-lens video clips;
in practical applications, the target video to be advertised can be obtained from the media asset database. For subsequent searching, a saving path of the target video can be recorded.
In this embodiment, a target video may be subjected to scene cut segmentation by using a content change-based split-mirror algorithm, so as to obtain a plurality of split-mirror video segments. In the invention, the dynamic advertisement implantation position is detected by taking the split-mirror video clip as a basic unit.
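The patent names only "a content change-based split-mirror algorithm" without specifying a tool. Purely as one possible sketch, the open-source PySceneDetect library can perform this kind of content-change shot splitting; the threshold value and file name below are assumptions, not details from the text.

```python
from scenedetect import detect, ContentDetector

def split_mirror_segments(video_path, threshold=27.0):
    """Scene-shot segmentation based on content change; returns a list of
    (begin_frame_index, end_frame_index) pairs, one per split-mirror video clip."""
    scene_list = detect(video_path, ContentDetector(threshold=threshold))
    return [(start.get_frames(), end.get_frames()) for start, end in scene_list]

# Example (hypothetical file name):
# segments = split_mirror_segments("target_video.mp4")
```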
S102, screening a target split-mirror video clip containing a target object from a plurality of split-mirror video clips;
the target object is an object suitable for dynamic advertisement implantation, such as a billboard, a picture frame, a television screen, a billboard, an LED (Light Emitting Diode) screen, and the like.
Step S103, screening out, from the plurality of target split-mirror video clips, the target split-mirror video clips whose relevant parameters of the target object meet preset parameter thresholds, as dynamic advertisement implantation video clips;
wherein the relevant parameters of the target object include: the key frame number, the screen occupation ratio, the definition and the shielded ratio of the target object.
And S104, for each dynamic advertisement implantation video clip, performing point location regression on the first frame containing the target object by adopting a pre-trained cascaded pyramid network model to obtain an accurate dynamic advertisement implantation position.
Wherein, the first frame of the target object is also the first key frame of the target object.
In practical applications, a Cascaded Pyramid Network (CPN) can be trained using an image data set labeled with accurate point positions. The network consists of two parts: a GlobalNet feature pyramid network identifies the easy key points, and a RefineNet fits the difficult key points that the feature pyramid network cannot localize well, so that more accurate point location regression is achieved.
The implementation process of step S104 is specifically as follows:
determining a first frame of a target object appearing in a dynamic advertisement implantation video clip;
And acquiring a target object screenshot according to the Bbox of the target object position, wherein the Bbox denotes the bounding box of the target object: the specific position of the target object is given by the box formed by the upper-left corner coordinates (x1, y1) and the lower-right corner coordinates (x2, y2) of the target object's position in each frame.
And performing point location regression on the first frame of the target object, within the target object screenshot, by using the pre-trained cascaded pyramid network model with key point detection capability, to obtain accurate point location data; this point location data is the dynamic advertisement implantation position.
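As an illustration only, the sketch below shows how this step might be wired together: the target object is cropped by its Bbox and passed to a pre-trained cascaded pyramid network. Here `cpn_model` is a placeholder for such a model, and its input/output conventions (normalised RGB tensor in, one heatmap per key point out) are assumptions rather than details given in the text.

```python
import cv2
import torch

def regress_ad_points(frame, bbox, cpn_model, input_size=(256, 256)):
    """Crop the target object by its Bbox and regress point locations with an
    assumed pre-trained cascaded pyramid network returning K heatmaps."""
    x1, y1, x2, y2 = bbox
    crop = frame[y1:y2, x1:x2]                      # target object screenshot
    resized = cv2.resize(crop, input_size)
    tensor = torch.from_numpy(resized).permute(2, 0, 1).float().unsqueeze(0) / 255.0
    with torch.no_grad():
        heatmaps = cpn_model(tensor)[0]             # assumed shape (K, h, w)
    points = []
    for hm in heatmaps:
        idx = int(torch.argmax(hm))
        py, px = divmod(idx, hm.shape[1])
        # map heatmap coordinates back to the original frame
        fx = x1 + px * (x2 - x1) / hm.shape[1]
        fy = y1 + py * (y2 - y1) / hm.shape[0]
        points.append((fx, fy))
    return points                                    # point location data
```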
In summary, the invention discloses a method for detecting a dynamic advertisement implantation position: an obtained target video requiring advertisement implantation is segmented into scene shots by a lens splitting algorithm to obtain a plurality of split-mirror video segments; the target split-mirror video segments containing a target object suitable for dynamic advertisement implantation are screened out; from these, the target split-mirror video segments whose relevant parameters of the target object meet preset parameter thresholds are selected as dynamic advertisement implantation video segments; and point location regression is performed on the first frame containing the target object in each dynamic advertisement implantation video segment with a pre-trained cascaded pyramid network model to obtain an accurate dynamic advertisement implantation position. The invention uses deep learning and image processing technology to automatically screen and accurately locate dynamic advertisement implantation positions in a video, thereby saving the time and cost of manually watching the video to detect and mark these positions, improving detection precision and efficiency, further reducing advertisement implantation cost and increasing advertisement revenue to a certain extent.
In order to facilitate the subsequent search of the dynamic advertisement placement position, after step S104, the method may further include:
and storing the relevant information of the dynamic advertisement implantation video clip and the corresponding dynamic advertisement implantation position into a relevant database.
The information related to the dynamic advertisement placement video clip may include: video number, the number of frames of the target object appearing for the first time, the duration of the appearance of the target object, the category of the target object, point location data and the like.
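Purely as an illustration, such a stored record might look like the following Python dictionary; every field name and value here is hypothetical and only mirrors the items listed above.

```python
# Illustrative record layout; field names and values are assumptions.
placement_record = {
    "video_id": "ep_0423",            # video number
    "first_frame_index": 1530,        # frame where the target object first appears
    "display_duration": 6.4,          # seconds the target object stays on screen
    "target_category": "billboard",   # class predicted by the detection model
    "points": [(412, 188), (760, 176), (771, 402), (420, 415)],  # regressed point data
}
```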
If not all split-mirror video clips of the target video have been detected yet, the remaining split-mirror video clips continue to be screened until detection has been completed for every split-mirror video clip of the target video.
After the dynamic advertisement implantation position detection of the current target video is finished, detection continues for the other target videos in the media asset database.
Because the invention detects the position suitable for dynamic advertisement implantation from a plurality of split-mirror video clips, in order to improve the detection efficiency, the invention preliminarily screens the split-mirror video clips obtained by splitting before determining the dynamic advertisement implantation position.
Therefore, in order to further optimize the above embodiment, referring to fig. 2, a flowchart of a method for obtaining a plurality of video segments by performing scene segmentation on an acquired target video needing advertisement implantation by using a split-view algorithm is disclosed in the embodiment of the present invention, that is, step S101 may specifically include:
step S201, a target video is subjected to scene shot segmentation by adopting a content change-based split mirror algorithm to obtain a plurality of original split mirror video clips;
step S202, determining the video duration of each original mirror video clip according to the head frame and the tail frame of each original mirror video clip;
the first frame and the last frame of the original split-mirror video clip are both key frames.
Specifically, the video duration clip_duration of a single original split-mirror video clip is calculated according to the following formula:
clip_duration = (end_frame_index - begin_frame_index) / fps;
where end_frame_index is the index of the last frame of the single original split-mirror video clip, begin_frame_index is the index of its first frame, and fps is its frame rate.
And S203, screening out the original split-mirror video clips with the video duration not less than the video duration threshold value from the plurality of original split-mirror video clips to serve as the split-mirror video clips.
The value of the video duration threshold is determined according to actual needs, for example, 3s, and the present invention is not limited herein.
It should be noted that the split-mirror video segments obtained by this screening in step S203 from the multiple original split-mirror video segments are the split-mirror video segments that serve as the basic unit for the subsequent dynamic advertisement implantation position detection.
In this embodiment, the original split-mirror video segments with the video duration less than the video duration threshold are discarded.
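A minimal sketch of this preliminary duration screening, using the clip_duration formula above; the 3 s threshold is the example value mentioned in the text, and the function name and signature are illustrative.

```python
def filter_by_duration(clips, fps, min_duration=3.0):
    """Keep only original split-mirror video clips whose duration is not less
    than the video duration threshold (example value: 3 seconds)."""
    kept = []
    for begin_frame_index, end_frame_index in clips:
        clip_duration = (end_frame_index - begin_frame_index) / fps
        if clip_duration >= min_duration:
            kept.append((begin_frame_index, end_frame_index))
    return kept
```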
To further optimize the above embodiment, referring to fig. 3, a flowchart of a method for screening a target split-mirror video clip containing a target object from a plurality of split-mirror video clips disclosed in the embodiment of the present invention is also disclosed, that is, step S102 specifically includes:
s301, sequentially extracting corresponding intermediate frames from each split-mirror video clip;
step S302, performing target object detection on each extracted intermediate frame by using a pre-trained target detection model;
the training process of the Yolov5 target detection model comprises the following steps: and training the Yolov5 network by using an image data set which contains a target object suitable for dynamic advertisement implantation and is marked with a boundary box to obtain a Yolov5 target detection model, wherein the Yolov5 target detection model has the capability of detecting the target object.
Step S303, determining the split mirror video clip corresponding to the intermediate frame of the detected target object as the target split mirror video clip.
It should be noted that, if no target object is detected in an intermediate frame, the split-mirror video segment corresponding to that intermediate frame is judged unsuitable for dynamic advertisement implantation and is discarded; the intermediate frame of the next split-mirror video segment is then extracted, and target object detection is performed on it in the same way.
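A sketch of this intermediate-frame check, assuming a Yolov5 model loaded through torch.hub with custom weights fine-tuned on the target-object classes; the weights path is hypothetical.

```python
import cv2
import torch

# Hypothetical weights file for a Yolov5 model fine-tuned on billboard/frame/screen classes.
model = torch.hub.load('ultralytics/yolov5', 'custom', path='ad_target_yolov5.pt')

def detect_target_in_middle_frame(video_path, begin_frame_index, end_frame_index):
    """Extract the intermediate frame of a split-mirror video clip and run the
    target detection model on it; returns the detected boxes (possibly empty)."""
    cap = cv2.VideoCapture(video_path)
    mid = (begin_frame_index + end_frame_index) // 2
    cap.set(cv2.CAP_PROP_POS_FRAMES, mid)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        return []
    results = model(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    return results.xyxy[0].tolist()   # rows of [x1, y1, x2, y2, confidence, class]
```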
To further optimize the above embodiment, referring to fig. 4, a flowchart of a method for screening out a dynamic advertisement placement video clip from a plurality of target split-mirror video clips disclosed in the embodiment of the present invention is that step S103 may specifically include:
s401, counting the number of key frames of continuous occurrence of target objects in each target split-mirror video clip;
When counting the number of key frames in which the target object continuously appears, sampling starts from the intermediate frame of the target split-mirror video clip and expands to its left and right sides, extracting one key frame every ten frames.
S402, screening target split-mirror video clips with the key frame number not less than a key frame number threshold from all the target split-mirror video clips, and recording as first target split-mirror video clips;
it should be noted that the target split-mirror video segments with the number of key frames smaller than the threshold value of the number of key frames are discarded, and the step S203 in the embodiment shown in fig. 2 is returned to, and the preliminary screening is continuously performed on the split-mirror video segments.
In practical application, the first target split-mirror video clip can be screened according to the number of key frames, and the screening can be carried out according to the continuous occurrence time of the target object.
Specifically, (1) a calculation formula for calculating the continuous occurrence time of the target object in each target split-mirror video segment is as follows:
object_display_duration=frame_nums/fps;
in the formula, object_display_duration is the duration for which the target object continuously appears in the single target split-mirror video segment, frame_nums is the number of key frames in which the target object continuously appears in that segment, and fps is the frame rate.
(2) And screening out the target split-mirror video clips with the continuous occurrence time length of the target object not less than the preset time length from all the target split-mirror video clips as first target split-mirror video clips.
And discarding the target split-mirror video clips with the continuous appearance time of the target object being lower than the preset time.
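The key-frame counting described above might be sketched as follows; `contains_target` is an assumed wrapper around the detection model of step S302, and the every-ten-frames sampling follows the description in the text. The duration formula above can then be applied to the returned count.

```python
import cv2

def count_consecutive_key_frames(video_path, begin_frame_index, end_frame_index,
                                 contains_target, step=10):
    """Starting from the intermediate frame and expanding to the left and right,
    sample one key frame every `step` frames and count the consecutive samples
    that still contain the target object."""
    cap = cv2.VideoCapture(video_path)

    def has_target(idx):
        cap.set(cv2.CAP_PROP_POS_FRAMES, idx)
        ok, frame = cap.read()
        return ok and contains_target(frame)

    mid = (begin_frame_index + end_frame_index) // 2
    frame_nums = 1 if has_target(mid) else 0
    if frame_nums:
        idx = mid - step                               # expand to the left
        while idx >= begin_frame_index and has_target(idx):
            frame_nums += 1
            idx -= step
        idx = mid + step                               # expand to the right
        while idx <= end_frame_index and has_target(idx):
            frame_nums += 1
            idx += step
    cap.release()
    return frame_nums
```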
Step S403, counting the screen occupation ratio of the target object in each first target split-mirror video clip;
wherein, the screen occupation ratio of the target object refers to the percentage of the target object in the whole screen.
Specifically, the process of calculating the screen occupation ratio of the target object is as follows:
assume that the size of the single first target split-mirror video clip is (width, height), width is the width, and height is the height.
(1) The screen occupation ratio of the target object in each frame is calculated as frame_screen_ratio = ((x2 - x1) * (y2 - y1)) / (width * height), where (x1, y1) are the upper-left corner coordinates and (x2, y2) the lower-right corner coordinates of the target object's position in that frame.
(2) The average screen occupation ratio clip_screen_ratio over the key frames in which the target object continuously appears is counted:
clip_screen_ratio = (sum of frame_screen_ratio over these key frames) / frame_nums;
(3) This average clip_screen_ratio is used as the screen occupation ratio of the target object.
S404, screening out first target split-mirror video clips with the screen occupation ratio of a target object being not less than a screen occupation ratio threshold value from all the first target split-mirror video clips, and recording as second target split-mirror video clips;
and discarding the first target split-mirror video clip with the screen occupation ratio of the target object smaller than the screen occupation ratio threshold value.
The value of the screen occupation ratio threshold is determined according to actual needs, such as 0.1.
It should be noted that, in practical applications, the screen occupation ratio of the target object should not be too large, and usually should not exceed 0.3, so as to meet the advertisement implantation requirement.
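A small sketch of the screen occupation ratio computation and the corresponding filter, following the frame_screen_ratio formula above; the 0.1 lower bound and 0.3 upper bound are the example values mentioned in the text.

```python
def clip_screen_ratio(boxes, width, height):
    """Average screen occupation ratio of the target object over the key frames
    in which it appears; `boxes` is a list of (x1, y1, x2, y2) per key frame."""
    ratios = [((x2 - x1) * (y2 - y1)) / (width * height) for x1, y1, x2, y2 in boxes]
    return sum(ratios) / len(ratios)

def screen_ratio_ok(boxes, width, height, low=0.1, high=0.3):
    """Filter following the example thresholds in the text; both are configurable."""
    ratio = clip_screen_ratio(boxes, width, height)
    return low <= ratio <= high
```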
S405, calculating the definition of a target object in each second target split-mirror video clip;
Specifically, in practical applications, the Laplacian operator may be used to calculate the definition (sharpness) of the target object in each frame.
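A minimal sketch of this Laplacian-based definition measure on the target object region; measuring sharpness as the variance of the Laplacian response is a common choice, and the comparison against the definition threshold is left to the caller.

```python
import cv2

def target_sharpness(frame, bbox):
    """Definition of the target object region, measured as the variance of the
    Laplacian response; a higher value indicates a sharper region."""
    x1, y1, x2, y2 = bbox
    gray = cv2.cvtColor(frame[y1:y2, x1:x2], cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()
```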
S406, screening out second target split-mirror video clips with the definition of the target object not less than a definition threshold value from all the second target split-mirror video clips, and recording as third target split-mirror video clips;
the value of the sharpness threshold is determined according to actual needs, and the invention is not limited herein.
In this embodiment, the second target split-mirror video segment with the definition smaller than the definition threshold is discarded.
Step S407, calculating the shielded proportion of the target object in each third target split-mirror video clip;
in practical application, each frame of each third target split-mirror video segment can be semantically segmented by using a Deeplabv3 segmentation model based on Mobilenetv2, the overlapping area of objects such as people, vehicles and plants and the target object region is calculated, and the occluded proportion of the target object is determined based on the overlapping area.
Wherein, Mobilenetv2 is a lightweight backbone network, and Deeplabv3 is a split network.
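Once a semantic segmentation mask of potential occluders is available (for example from the Mobilenetv2-based Deeplabv3 model mentioned above; obtaining that mask is not shown here), the occluded proportion of the target object can be computed as in the following sketch.

```python
import numpy as np

def occluded_ratio(occluder_mask, bbox):
    """Fraction of the target object's box covered by potential occluders
    (people, vehicles, plants, ...); `occluder_mask` is a boolean H x W array
    produced by a semantic segmentation model."""
    x1, y1, x2, y2 = bbox
    region = occluder_mask[y1:y2, x1:x2]
    return float(np.count_nonzero(region)) / max(region.size, 1)
```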
Step S408, screening out a third target sub-lens video segment of which the shielded proportion of the target object is not more than the shielded proportion threshold value from all the third target sub-lens video segments, and taking the third target sub-lens video segment as a dynamic advertisement implantation video segment.
And discarding the third target split-mirror video clip with the occluded proportion of the target object larger than the occluded proportion threshold, and returning to the step S203 to screen the next split-mirror video clip.
Corresponding to the embodiment of the method, the invention also discloses a device for detecting the dynamic advertisement implantation position.
Referring to fig. 5, a schematic structural diagram of a device for detecting a dynamic advertisement placement position disclosed in an embodiment of the present invention includes:
the video segmentation unit 501 is configured to segment a scene shot of an acquired target video needing advertisement implantation by using a split-lens algorithm to obtain a plurality of split-lens video segments;
in practical applications, the target video to be advertised can be obtained from the media asset database. For subsequent searching, a saving path of the target video can be recorded.
In this embodiment, a target video may be subjected to scene cut segmentation by using a content change-based split-mirror algorithm, so as to obtain a plurality of split-mirror video segments. In the invention, the dynamic advertisement implantation position is detected by taking the split-mirror video clip as a basic unit.
A first filtering unit 502, configured to filter out a target split-mirror video clip containing a target object from the plurality of split-mirror video clips;
the target object is an object suitable for dynamic advertisement implantation, such as a billboard, a picture frame, a television screen, a billboard, an LED (Light Emitting Diode) screen, and the like.
A second screening unit 503, configured to screen out, from the plurality of target split-mirror video segments, the target split-mirror video segments whose relevant parameters of the target object meet preset parameter thresholds, as dynamic advertisement implantation video segments;
wherein the relevant parameters of the target object include: the key frame number, the screen occupation ratio, the definition and the shielded ratio of the target object.
And an advertisement implantation position determining unit 504, configured to perform, for each dynamic advertisement implantation video clip, point location regression on the first frame containing the target object by using a pre-trained cascaded pyramid network model, so as to obtain an accurate dynamic advertisement implantation position.
Wherein, the first frame of the target object is also the first key frame of the target object.
In practical applications, a Cascaded Pyramid Network (CPN) can be trained using an image data set labeled with accurate point positions. The network consists of two parts: a GlobalNet feature pyramid network identifies the easy key points, and a RefineNet fits the difficult key points that the feature pyramid network cannot localize well, so that more accurate point location regression is achieved.
Thus, the ad placement location determining unit 504 may be specifically configured to:
determining a first frame of a target object appearing in the dynamic advertisement implantation video clip;
And acquiring a screenshot of the target object according to the Bbox of the target object's position, wherein the Bbox denotes the bounding box of the target object: the specific position of the target object is given by the box formed by the upper-left corner coordinates (x1, y1) and the lower-right corner coordinates (x2, y2) of the target object's position in each frame.
And performing point location regression on the first frame of the target object, within the target object screenshot, by using the pre-trained cascaded pyramid network model with key point detection capability, to obtain accurate point location data; this point location data is the dynamic advertisement implantation position.
In summary, the invention discloses a device for detecting a dynamic advertisement implantation position: an obtained target video requiring advertisement implantation is segmented into scene shots to obtain a plurality of split-mirror video segments; the target split-mirror video segments containing a target object suitable for dynamic advertisement implantation are screened out; from these, the target split-mirror video segments whose relevant parameters of the target object meet preset parameter thresholds are selected as dynamic advertisement implantation video segments; and point location regression is performed on the first frame containing the target object in each dynamic advertisement implantation video segment with a pre-trained cascaded pyramid network model to obtain an accurate dynamic advertisement implantation position. The invention uses deep learning and image processing technology to automatically screen and accurately locate dynamic advertisement implantation positions in a video, thereby saving the time and cost of manually watching the video to detect and mark these positions, improving detection precision and efficiency, further reducing advertisement implantation cost and increasing advertisement revenue to a certain extent.
In order to facilitate the subsequent search of the dynamic advertisement placement position, the detection device may further include:
and the storage unit is used for storing the relevant information of the dynamic advertisement implantation video clip and the corresponding dynamic advertisement implantation position to a database.
The information related to the dynamic advertisement placement video clip may include: video number, the number of frames of the target object appearing for the first time, the duration of the appearance of the target object, the category of the target object, point location data and the like.
And when all the split-mirror video clips in the target video are not completely detected, continuously screening the split-mirror video clips until all the split-mirror video clips in the target video are completely detected.
And after the dynamic advertisement implantation position detection of the current target video is finished, continuously detecting the dynamic advertisement implantation position of other target videos in the media asset database.
Because the invention detects the position suitable for dynamic advertisement implantation from a plurality of split-mirror video clips, in order to improve the detection efficiency, the invention preliminarily screens the split-mirror video clips obtained by splitting before determining the dynamic advertisement implantation position.
Therefore, to further optimize the above embodiment, referring to fig. 6, a schematic structural diagram of a video segmentation unit disclosed in the embodiment of the present invention, the video segmentation unit may specifically include:
a video segmentation subunit 601, configured to perform scene segmentation on the target video by using a content change-based mirror segmentation algorithm to obtain a plurality of original mirror-segmented video segments;
a duration determining subunit 602, configured to determine, according to the first frame and the last frame of each original mirror-divided video segment, a video duration of each original mirror-divided video segment;
the first frame and the last frame of the original split-mirror video clip are both key frames.
Specifically, the video duration clip_duration of a single original split-mirror video clip is calculated according to the following formula:
clip_duration = (end_frame_index - begin_frame_index) / fps;
in the formula, end_frame_index is the index of the last frame of the single original split-mirror video clip, begin_frame_index is the index of its first frame, and fps is its frame rate.
A first screening subunit 603, configured to screen, from the multiple original split-mirror video segments, an original split-mirror video segment whose video duration is not less than a video duration threshold as the split-mirror video segment.
The value of the video duration threshold is determined according to actual needs, for example, 3s, and the present invention is not limited herein.
In this embodiment, the original split-mirror video segments with the video duration less than the video duration threshold are discarded.
In order to further optimize the foregoing embodiment, referring to fig. 7, a schematic structural diagram of a first screening unit disclosed in the embodiment of the present invention, the first screening unit may specifically include:
an intermediate frame extraction subunit 701, configured to extract corresponding intermediate frames from each of the split-mirror video segments in sequence;
a detection subunit 702, configured to perform target object detection on each extracted intermediate frame by using a pre-trained target detection model;
the training process of the Yolov5 target detection model comprises the following steps: and training the Yolov5 network by using an image data set which contains a target object suitable for dynamic advertisement implantation and is marked with a boundary box to obtain a Yolov5 target detection model, wherein the Yolov5 target detection model has the capability of detecting the target object.
A video segment determining subunit 703, configured to determine the split-mirror video segment corresponding to the intermediate frame of the detected target object as the target split-mirror video segment.
It should be noted that, if the target object is not detected from the intermediate frame, it is determined that the split-mirror video segment corresponding to the intermediate frame in which the target object is not detected is not suitable for dynamic advertisement implantation, at this time, the split-mirror video segment corresponding to the intermediate frame in which the target object is not detected is discarded, the corresponding intermediate frame is continuously extracted from the next split-mirror video segment, and the target object is detected again from the intermediate frame of the next split-mirror video segment.
Referring to fig. 8, in an embodiment of the structural schematic diagram of a second screening unit, the second screening unit may specifically include:
a key frame number counting subunit 801, configured to count the number of key frames in which the target object continuously appears in each of the target split-mirror video segments;
When counting the number of key frames in which the target object continuously appears, sampling starts from the intermediate frame of the target split-mirror video clip and expands to its left and right sides, extracting one key frame every ten frames.
A second screening subunit 802, configured to screen out, from each of the target mirrored video segments, a target mirrored video segment whose key frame number is not less than a key frame number threshold, and mark the target mirrored video segment as a first target mirrored video segment;
in practical application, the first target split-mirror video clip can be screened according to the number of key frames, and the screening can be carried out according to the continuous occurrence time of the target object.
Specifically, (1) a calculation formula for calculating the continuous occurrence time of the target object in each target split-mirror video clip is as follows:
object_display_duration=frame_nums/fps;
in the formula, object_display_duration is the duration for which the target object continuously appears in the single target split-mirror video segment, frame_nums is the number of key frames in which the target object continuously appears in that segment, and fps is the frame rate.
(2) And screening out the target split-mirror video clips with the target object continuous occurrence time length not less than the preset time length from all the target split-mirror video clips as first target split-mirror video clips.
And discarding the target split-mirror video clips with the continuous appearance time of the target object being lower than the preset time.
A screen occupation ratio statistics subunit 803, configured to count the screen occupation ratio of the target object in each first target split-mirror video segment;
wherein, the screen occupation ratio of the target object refers to the percentage of the target object in the whole screen.
Specifically, the process of calculating the screen occupation ratio of the target object is as follows:
assume that the size of the single first target split-mirror video clip is (width, height), where width is the width and height is the height.
(1) The screen occupation ratio of the target object in each frame is calculated as frame_screen_ratio = ((x2 - x1) * (y2 - y1)) / (width * height), where (x1, y1) are the upper-left corner coordinates and (x2, y2) the lower-right corner coordinates of the target object's position in that frame.
(2) The average screen occupation ratio clip_screen_ratio over the key frames in which the target object continuously appears is counted:
clip_screen_ratio = (sum of frame_screen_ratio over these key frames) / frame_nums;
(3) This average clip_screen_ratio is used as the screen occupation ratio of the target object.
A third screening subunit 804, configured to screen out, from each first target split-mirror video segment, the first target split-mirror video segment whose screen occupation ratio is not less than a screen occupation ratio threshold, and record the first target split-mirror video segment as a second target split-mirror video segment;
and discarding the first target split-mirror video clip with the screen occupation ratio of the target object smaller than the screen occupation ratio threshold value.
The value of the screen occupation ratio threshold is determined according to actual needs, such as 0.1.
It should be noted that, in practical applications, the screen occupation ratio of the target object cannot be too large, and usually cannot exceed 0.3, so as to meet the advertisement implantation requirement.
A sharpness calculation subunit 805 configured to calculate a sharpness of the target object in each of the second target split-mirror video segments;
Specifically, in practical applications, the Laplacian operator may be used to calculate the definition (sharpness) of the target object in each frame.
A fourth screening subunit 806, configured to screen, from each second target split-mirror video segment, a second target split-mirror video segment whose definition is not less than a definition threshold, and record the second target split-mirror video segment as a third target split-mirror video segment;
the value of the sharpness threshold is determined according to actual needs, and the invention is not limited herein.
In this embodiment, the second target split-mirror video segment with the definition smaller than the definition threshold is discarded.
An occluded proportion calculating subunit 807, configured to calculate the occluded proportion of the target object in each third target split-mirror video segment;
In practical application, each frame of each third target split-mirror video segment can be semantically segmented by using a Deeplabv3 segmentation model based on Mobilenetv2, the overlapping area between objects such as people, vehicles and plants and the target object region is calculated, and the occluded proportion of the target object is determined based on the overlapping area.
Here, Mobilenetv2 is a lightweight backbone network, and Deeplabv3 is a semantic segmentation network.
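The occlusion measurement can be sketched as follows; the segmentation inference itself is omitted, and the mask layout, class ids and bounding-box format are assumptions rather than details given in the text:

```python
# Sketch of the occluded-proportion estimate: given a per-pixel class mask
# (e.g. produced by a Deeplabv3 model) and the target object's bounding box,
# measure the fraction of the box covered by occluder classes such as person,
# vehicle or plant. Class ids and mask layout are assumptions; running the
# segmentation model is omitted here.
import numpy as np


def occluded_proportion(seg_mask, box, occluder_class_ids):
    """seg_mask: HxW array of class ids; box: (x1, y1, x2, y2) of the target object."""
    x1, y1, x2, y2 = box
    region = seg_mask[y1:y2, x1:x2]
    if region.size == 0:
        return 0.0
    overlap = np.isin(region, list(occluder_class_ids))
    return float(overlap.mean())


if __name__ == "__main__":
    mask = np.zeros((1080, 1920), dtype=np.int64)
    mask[300:600, 400:700] = 15          # a "person" region (class id assumed)
    print(occluded_proportion(mask, (350, 350, 800, 700), {15}))
```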
A fifth screening subunit 808, configured to screen out, from each third target split-mirror video segment, the third target split-mirror video segments whose occluded proportion is not greater than the occluded proportion threshold, and use them as dynamic advertisement implantation video segments.
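Putting the subunits together, the second screening unit behaves like the following chain of filters; this is an illustrative sketch only, and the clip attributes and every threshold value are assumed example values:

```python
# End-to-end sketch of the screening chain described above. The clip attributes
# and all threshold values are assumed example values; real clips would carry
# per-frame detections, decoded frames and segmentation masks.

def select_ad_clips(target_clips, key_frame_threshold=50, min_ratio=0.1,
                    max_ratio=0.3, sharpness_threshold=100.0, max_occlusion=0.2):
    selected = []
    for clip in target_clips:
        if clip["frame_nums"] < key_frame_threshold:                # key-frame count screening
            continue
        if not (min_ratio <= clip["screen_ratio"] <= max_ratio):    # screen occupation ratio screening
            continue
        if clip["sharpness"] < sharpness_threshold:                 # sharpness screening
            continue
        if clip["occluded_proportion"] > max_occlusion:             # occlusion screening
            continue
        selected.append(clip)  # qualifies as a dynamic advertisement implantation clip
    return selected
```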
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (8)

1. A method for detecting a dynamic advertisement placement location, comprising:
carrying out scene shot segmentation on the obtained target video in which an advertisement needs to be implanted by adopting a lens segmentation algorithm, to obtain a plurality of split-mirror video clips;
screening out a target split-mirror video clip containing a target object from the plurality of split-mirror video clips, wherein the target object is an object suitable for dynamic advertisement implantation;
screening out, from the plurality of target split-mirror video clips, a target split-mirror video clip in which relevant parameters of the target object meet a preset parameter threshold, as a dynamic advertisement implantation video clip, wherein the relevant parameters of the target object comprise:
the key frame number, the screen occupation ratio, the definition and the occluded proportion of the target object;
performing, on the dynamic advertisement implantation video clip, point location regression on the first frame containing the target object by adopting a pre-trained cascade pyramid network model, to obtain an accurate dynamic advertisement implantation position;
wherein the screening out, from the plurality of target split-mirror video clips, a target split-mirror video clip in which the relevant parameters of the target object meet the preset parameter threshold, as a dynamic advertisement implantation video clip, specifically comprises:
counting the number of key frames of the target object continuously appearing in each target split-mirror video clip;
screening target split-mirror video clips with the key frame number not less than a key frame number threshold value from all the target split-mirror video clips, and recording as first target split-mirror video clips;
counting the screen occupation ratio of the target object in each first target split-mirror video clip;
screening out first target split mirror video clips with the screen occupation ratio not less than a screen occupation ratio threshold value from all the first target split mirror video clips, and recording as second target split mirror video clips;
calculating the definition of the target object in each second target split-mirror video clip;
screening out second target split-mirror video clips with the definition not less than a definition threshold value from all the second target split-mirror video clips, and recording as third target split-mirror video clips;
calculating the occluded proportion of the target object in each third target split-mirror video clip;
and screening out, from all the third target split-mirror video clips, the third target split-mirror video clips whose occluded proportion is not greater than an occluded proportion threshold, and taking them as dynamic advertisement implantation video clips.
2. The detection method according to claim 1, wherein the carrying out scene shot segmentation on the obtained target video in which an advertisement needs to be implanted by adopting a lens segmentation algorithm, to obtain a plurality of split-mirror video clips, specifically comprises:
performing scene shot segmentation on the target video by adopting a content-change-based lens segmentation algorithm to obtain a plurality of original split-mirror video clips;
determining the video duration of each original split-mirror video clip according to the first frame and the last frame of each original split-mirror video clip;
and screening out the original split-mirror video clips with the video duration not less than the video duration threshold value from the plurality of original split-mirror video clips as the split-mirror video clips.
3. The detection method according to claim 1, wherein the step of screening out a target split-mirror video segment containing a target object from the plurality of split-mirror video segments specifically comprises:
extracting corresponding intermediate frames from each of the split-mirror video clips in sequence;
carrying out target object detection on each extracted intermediate frame by using a pre-trained target detection model;
and determining the split-mirror video segment corresponding to the intermediate frame of the detected target object as the target split-mirror video segment.
4. The detection method according to claim 1, further comprising:
and storing the relevant information of the dynamic advertisement implantation video clip and the corresponding dynamic advertisement implantation position to a database.
5. A dynamic advertisement placement position detection device, comprising:
a video segmentation unit, used for carrying out scene shot segmentation on the acquired target video in which an advertisement needs to be implanted by adopting a lens segmentation algorithm to obtain a plurality of split-mirror video clips;
the first screening unit is used for screening out a target split-mirror video clip containing a target object from the plurality of split-mirror video clips, wherein the target object is an object suitable for dynamic advertisement implantation;
a second screening unit, configured to screen out, from the plurality of target split-mirror video clips, a target split-mirror video clip in which relevant parameters of the target object meet a preset parameter threshold, as a dynamic advertisement implantation video clip, wherein the relevant parameters of the target object comprise: the key frame number, the screen occupation ratio, the definition and the occluded proportion of the target object;
an advertisement implantation position determining unit, used for performing, on the dynamic advertisement implantation video clip, point location regression on the first frame containing the target object by adopting a pre-trained cascade pyramid network model, to obtain an accurate dynamic advertisement implantation position;
the second screening unit specifically includes:
a key frame number counting subunit, configured to count the number of key frames in which the target object continuously appears in each of the target split-mirror video clips;
the second screening subunit is used for screening the target split-mirror video clips of which the number of the key frames is not less than the threshold value of the number of the key frames from all the target split-mirror video clips and recording the target split-mirror video clips as first target split-mirror video clips;
a screen occupation ratio counting subunit, configured to count a screen occupation ratio of the target object in each of the first target split-mirror video clips;
the third screening subunit is configured to screen out, from each first target split-mirror video segment, a first target split-mirror video segment whose screen occupation ratio is not less than a screen occupation ratio threshold value, and record the first target split-mirror video segment as a second target split-mirror video segment;
the definition calculating subunit is used for calculating the definition of the target object in each second target split-mirror video clip;
a fourth screening subunit, configured to screen, from the second target split-mirror video segments, a second target split-mirror video segment whose definition is not less than a definition threshold value, and record the second target split-mirror video segment as a third target split-mirror video segment;
an occluded proportion calculation subunit, configured to calculate an occluded proportion of the target object in each of the third target split-mirror video segments;
and a fifth screening subunit, used for screening out, from each third target split-mirror video segment, the third target split-mirror video segments whose occluded proportion is not greater than an occluded proportion threshold, and using them as dynamic advertisement implantation video segments.
6. The detection apparatus according to claim 5, wherein the video slicing unit specifically includes:
a video segmentation subunit, used for performing scene shot segmentation on the target video by adopting a content-change-based lens segmentation algorithm to obtain a plurality of original split-mirror video segments;
a duration determining subunit, configured to determine, according to the first frame and the last frame of each original split-mirror video segment, the video duration of each original split-mirror video segment;
and the first screening subunit is used for screening out the original split-mirror video clips with the video duration not less than the video duration threshold value from the plurality of original split-mirror video clips as the split-mirror video clips.
7. The detection apparatus according to claim 5, wherein the first screening unit specifically includes:
an intermediate frame extraction subunit, configured to extract corresponding intermediate frames from each of the split-mirror video segments in sequence;
the detection subunit is used for carrying out target object detection on each extracted intermediate frame by using a pre-trained target detection model;
and the video clip determining subunit is used for determining the split-mirror video clip corresponding to the intermediate frame of the detected target object as the target split-mirror video clip.
8. The detection device according to claim 5, further comprising:
and the storage unit is used for storing the relevant information of the dynamic advertisement implantation video clip and the corresponding dynamic advertisement implantation position to a database.
CN202011601342.0A 2020-12-30 2020-12-30 Method and device for detecting dynamic advertisement implantation position Active CN112752151B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011601342.0A CN112752151B (en) 2020-12-30 2020-12-30 Method and device for detecting dynamic advertisement implantation position

Publications (2)

Publication Number Publication Date
CN112752151A CN112752151A (en) 2021-05-04
CN112752151B true CN112752151B (en) 2022-09-20

Family

ID=75647295

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011601342.0A Active CN112752151B (en) 2020-12-30 2020-12-30 Method and device for detecting dynamic advertisement implantation position

Country Status (1)

Country Link
CN (1) CN112752151B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113490028A (en) * 2021-06-01 2021-10-08 深圳喜悦机器人有限公司 Video processing method, device, storage medium and terminal
CN113345022B (en) * 2021-07-05 2023-02-17 湖南快乐阳光互动娱乐传媒有限公司 Dynamic three-dimensional advertisement implanting method and device, electronic equipment and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104581430B (en) * 2013-10-21 2018-06-19 华为技术有限公司 Advertisement cut-in method and equipment in video playing
WO2016028813A1 (en) * 2014-08-18 2016-02-25 Groopic, Inc. Dynamically targeted ad augmentation in video
CN110415005A (en) * 2018-04-27 2019-11-05 华为技术有限公司 Determine the method, computer equipment and storage medium of advertisement insertion position
CN112153483B (en) * 2019-06-28 2022-05-13 腾讯科技(深圳)有限公司 Information implantation area detection method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant