CN108471543A - A kind of advertisement information adding method and device - Google Patents
- Publication number
- CN108471543A (application number CN201810200729.1A)
- Authority
- CN
- China
- Prior art keywords
- product
- video file
- image
- video
- advertisement information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
- H04N21/23424—Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving splicing one content stream with another content stream, e.g. for inserting or substituting an advertisement
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
- H04N21/44016—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving splicing one content stream with another content stream, e.g. for substituting a video clip
Abstract
An advertisement information adding method and device are disclosed in the embodiments of the present invention. Image recognition is performed on a video file to generate an image recognition result, the result including the products contained in the video file and the time points at which each product appears in the video file. According to each product's time points of appearance, the optimal display time of each product's advertisement information is determined, and each product is given a graded classification label, by mainstream-industry product category, in the scene corresponding to that optimal display time. A publicity product selected by the user from the products contained in the video file is obtained, and when the video file plays to the scene where the publicity product is labelled, the advertisement information of the publicity product is added to that scene for display. Based on the above method and device, the application efficiency of video can be improved.
Description
Technical field
The present invention relates to the field of video recognition technology, and in particular to an advertisement information adding method and device.
Background art
Image recognition technology refers to the use of computers to process, analyse and understand images in order to identify targets and objects of various patterns. As image recognition technology has matured, its range of application has grown ever wider; current image recognition technology can accurately identify integrated information such as the category, position and confidence of objects within a picture. In the video field, however, there is still no application at scale: because video images cannot be recognised at scale, video cannot be used for purposes other than viewing, and the application efficiency of video remains low.
Summary of the invention
In view of this, the embodiments of the present invention provide an advertisement information adding method and device that can improve the application efficiency of video.
To achieve the above object, the embodiment of the present invention provides the following technical solutions:
An advertisement information adding method, comprising:
performing image recognition on a video file to generate an image recognition result, the image recognition result including the products contained in the video file and the time points at which each product appears in the video file;

determining, according to each product's time points of appearance in the video file, the optimal display time of each product's advertisement information, and applying a graded classification label to each product, by mainstream-industry product category, in the scene corresponding to the optimal display time of that product's advertisement information;

obtaining a publicity product selected by the user from the products contained in the video file, and, when the video file plays to the scene where the publicity product is labelled, adding the advertisement information of the publicity product to that scene for display.
Optionally, performing image recognition on the video file to generate an image recognition result includes:

identifying the products contained in the images of the video file.
Optionally, before the products contained in the images of the video file are identified, the method further includes:

performing deep learning on an image data set using the Google Inception V3 algorithm to obtain an image classification model.
Optionally, during the deep learning performed on the image data set using the Google Inception V3 algorithm, the method further includes:

improving the deep learning model based on the annotated Open Images image data set.
Optionally, identifying the products contained in the images of the video file specifically includes:

extracting and saving the video key frames in the video file based on the edge detection algorithm of the open-source computer vision library OpenCV;

screening the video key frames using the Random Forest algorithm;

identifying the screened video key frames with the detector SSD algorithm according to the image classification model, and determining the products contained in the screened video key frames.
An advertisement information adding device, comprising:
an image recognition module for performing image recognition on a video file to generate an image recognition result, the image recognition result including the products contained in the video file and the time points at which each product appears in the video file;

a screening and classification module for determining, according to each product's time points of appearance in the video file, the optimal display time of each product's advertisement information, and applying a graded classification label to each product, by mainstream-industry product category, in the scene corresponding to the optimal display time of that product's advertisement information;

an advertisement information placement module for obtaining a publicity product selected by the user from the products contained in the video file and, when the video file plays to the scene where the publicity product is labelled, adding the advertisement information of the publicity product to that scene for display.
Optionally, the image recognition module is specifically configured to:

identify the products contained in the images of the video file.
Optionally, the device further includes:

an image classification model acquisition module for performing deep learning on an image data set using the Google Inception V3 algorithm, before the products contained in the images of the video file are identified, to obtain an image classification model.
Optionally, the image classification model acquisition module is specifically configured to:

improve the deep learning model based on the annotated Open Images image data set during the deep learning performed on the image data set with the Google Inception V3 algorithm.
Optionally, the image recognition module is specifically configured to:

extract and save the video key frames in the video file based on the edge detection algorithm of the open-source computer vision library OpenCV;

screen the video key frames using the Random Forest algorithm;

identify the screened video key frames with the detector SSD algorithm according to the image classification model, and determine the products contained in the screened video key frames.
Based on the above technical solution, the embodiments of the present invention disclose an advertisement information adding method and device. Image recognition is performed on a video file to generate an image recognition result, the result including the products contained in the video file and the time points at which each product appears in the video file. According to each product's time points of appearance, the optimal display time of each product's advertisement information is determined, and each product is given a graded classification label, by mainstream-industry product category, in the scene corresponding to that optimal display time. A publicity product selected by the user from the products contained in the video file is obtained, and when the video file plays to the scene where the publicity product is labelled, the advertisement information of the publicity product is added to that scene for display. Based on the above method and device, the application efficiency of video can be improved.
Description of the drawings
To explain the embodiments of the present invention or the technical solutions of the prior art more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. Clearly, the drawings described below are only embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a kind of flow diagram of advertisement information adding method provided in an embodiment of the present invention;
Fig. 2 is Inception V3 module diagrams provided in an embodiment of the present invention;
Fig. 3 is the network schematic diagram of Inception V3 provided in an embodiment of the present invention;
Fig. 4 is a schematic flow diagram of a method for identifying the products contained in the images of a video file provided in an embodiment of the present invention;
Fig. 5 is the basic structure schematic diagram of Open CV main bodys provided in an embodiment of the present invention;
Fig. 6 is a schematic diagram of the OpenCV-based video detection principle for moving objects provided in an embodiment of the present invention;
Fig. 7 is the schematic diagram of SSD object detecting methods provided in an embodiment of the present invention;
Fig. 8 is a kind of structural schematic diagram of advertisement information adding set disclosed by the embodiments of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Clearly, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art, based on the embodiments of the present invention and without creative effort, fall within the protection scope of the present invention.
Please refer to Fig. 1, which is a schematic flow diagram of an advertisement information adding method provided in an embodiment of the present invention. The method specifically includes the following steps:
Step S100: perform image recognition on a video file to generate an image recognition result, the image recognition result including the products contained in the video file and the time points at which each product appears in the video file.
This step includes identifying the products contained in the images of the video file. Before the products contained in the images of the video file are identified, the method further includes: performing deep learning on an image data set using the Google Inception V3 algorithm to obtain an image classification model; and, during that deep learning, improving the deep learning model based on the annotated Open Images image data set.

Inception is a CNN model open-sourced by Google; four versions have been released so far, each trained on data from the large-scale image database ImageNet. We can therefore use Google's Inception model directly for image classification; this embodiment is based on the Inception V3 model. Inception V3 has about 25 million parameters and uses roughly 5 billion multiply-add operations to classify one image, so a single image can be classified almost instantly. The Inception V3 module is shown in Fig. 2, and the Inception V3 network in Fig. 3.
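As a concrete illustration of why Inception V3 stays cheap despite its depth, one of its core design ideas is factorizing large convolutions into stacks of smaller ones. The sketch below is an illustrative calculation (not code from the patent) comparing the parameter count of a single 5x5 convolution against the two stacked 3x3 convolutions that Inception V3 uses in its place:

```python
def conv_params(in_ch, out_ch, k):
    """Weight count of a k x k convolution layer (bias ignored)."""
    return in_ch * out_ch * k * k

# A single 5x5 convolution over 64 input and 64 output channels...
single = conv_params(64, 64, 5)        # 64 * 64 * 25 = 102400 weights

# ...versus the Inception V3 factorization into two stacked 3x3 convolutions,
# which covers the same 5x5 receptive field.
factored = conv_params(64, 64, 3) * 2  # 2 * 64 * 64 * 9 = 73728 weights

savings = 1 - factored / single        # roughly a 28% reduction
```

The same factorization applies per spatial position, so the multiply-add count shrinks by the same ratio, which is how the network keeps per-image cost low at ~25M parameters.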
Step S110: determine, according to each product's time points of appearance in the video file, the optimal display time of each product's advertisement information, and apply a graded classification label to each product, by mainstream-industry product category, in the scene corresponding to the optimal display time of that product's advertisement information.
The mainstream-industry product categories cover 11 industries and 28 product categories in total, as follows:

Automobiles: [SUV], [MPV], [Car], [Sports car], [Other vehicles]
Electronics and appliances: [Mobile phones and accessories], [Household appliances], [Photographic equipment]
IT industry: [Computers], [Software]
Cosmetics: [Personal care and toiletries], [Cosmetic products]
Daily necessities: [Washing products], [Other daily necessities]
Alcoholic drinks: [Beer], [Red wine], [White wine], [Fruit wine], [Other drinks]
Food and beverages: [Food], [Beverages]
Pharmaceuticals: [Cold medicine], [Skin medication]
Real estate: [Agencies]
Catering: [Convenience stores], [Eating and drinking establishments]
Fashion and accessories: [Clothing], [Ornaments]
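The taxonomy above could be encoded as a simple mapping for the graded classification labelling. The sketch below is a hypothetical Python rendering of the 11 industries and 28 categories; the English names are approximate renderings of the translated list, and the second food-related industry is labelled "Catering" to keep the keys distinct:

```python
# Hypothetical encoding of the 11-industry / 28-category taxonomy used to
# attach graded industry/category labels to recognised products.
MAINSTREAM_CATEGORIES = {
    "Automobiles": ["SUV", "MPV", "Car", "Sports car", "Other vehicles"],
    "Electronics and appliances": ["Mobile phones and accessories",
                                   "Household appliances",
                                   "Photographic equipment"],
    "IT industry": ["Computers", "Software"],
    "Cosmetics": ["Personal care and toiletries", "Cosmetic products"],
    "Daily necessities": ["Washing products", "Other daily necessities"],
    "Alcoholic drinks": ["Beer", "Red wine", "White wine", "Fruit wine",
                         "Other drinks"],
    "Food and beverages": ["Food", "Beverages"],
    "Pharmaceuticals": ["Cold medicine", "Skin medication"],
    "Real estate": ["Agencies"],
    "Catering": ["Convenience stores", "Eating and drinking establishments"],
    "Fashion and accessories": ["Clothing", "Ornaments"],
}
```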
Step S120: obtain a publicity product selected by the user from the products contained in the video file, and, when the video file plays to the scene where the publicity product is labelled, add the advertisement information of the publicity product to that scene for display.
After the advertisement information of the publicity product has been added to the scene where the publicity product is labelled, users watching the video can be guided to click through and view the advertisement information of the publicity product.

The advertisement information may specifically take the form of an on-screen hanging banner advertisement.
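The banner-insertion step can be sketched as an alpha blend of a banner image onto the frames that fall inside the labelled scene's time window. The helper names and the blending approach below are illustrative assumptions — the patent does not specify how the overlay is composited:

```python
import numpy as np

def overlay_banner(frame, banner, x, y, alpha=0.8):
    """Alpha-blend a banner image onto a video frame at top-left corner (x, y).

    frame, banner: H x W x 3 uint8 arrays; the banner must fit within the
    frame. Returns a new frame, leaving the input untouched.
    """
    out = frame.copy()
    h, w = banner.shape[:2]
    region = out[y:y + h, x:x + w].astype(np.float32)
    blended = alpha * banner.astype(np.float32) + (1 - alpha) * region
    out[y:y + h, x:x + w] = blended.astype(np.uint8)
    return out

def should_show(scene_start, scene_end, t):
    """True while playback time t falls inside the labelled scene's window."""
    return scene_start <= t <= scene_end
```

During playback, `should_show` gates the overlay so the banner appears only while the labelled scene is on screen.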
This embodiment discloses an advertisement information adding method. Image recognition is performed on a video file to generate an image recognition result, the result including the products contained in the video file and the time points at which each product appears in the video file. According to each product's time points of appearance, the optimal display time of each product's advertisement information is determined, and each product is given a graded classification label, by mainstream-industry product category, in the scene corresponding to that optimal display time. A publicity product selected by the user from the products contained in the video file is obtained, and when the video file plays to the scene where the publicity product is labelled, the advertisement information of the publicity product is added to that scene for display. Based on the above method, the application efficiency of video can be improved.
Please refer to Fig. 4, which is a schematic flow diagram of a method, disclosed in an embodiment of the present invention, for identifying the products contained in the images of a video file. The method specifically includes:
Step S200: extract and save the video key frames in the video file based on the edge detection algorithm of the open-source computer vision library OpenCV.
A frame is the smallest unit of a moving picture: a single still image, equivalent to one exposure on a strip of cinema film. In animation software, a frame appears as one cell or one marker on the timeline. A key frame, equivalent to an original drawing in 2D animation, is the frame at which a key action occurs in the movement or change of a character or object.
The full name of OpenCV is Open Source Computer Vision Library. It is a cross-platform computer vision library distributed under the open-source BSD licence; it is highly portable and versatile and runs on multiple operating systems such as Linux, Windows and Mac OS. It consists of many functions and a small number of classes, and, to improve its versatility, provides interfaces for programming languages such as Python, Ruby and MATLAB. It implements many general-purpose algorithms for image processing and computer vision. The basic structure of the OpenCV main body is shown in Fig. 5.
In OpenCV, the main image format used is the IplImage structure.
Moving object detection is the first part of video moving-target detection and tracking: it detects and extracts moving targets from the monitored scene in real time. Four methods are commonly used for moving object detection: consecutive-frame differencing, background subtraction, optical flow and the kinetic energy method. The OpenCV-based video detection principle for moving objects mainly separates the target moving object from the background image in a complicated background according to certain characteristic information of the target object, such as its contour, colour or shape. Fig. 6 shows the OpenCV-based video detection principle for moving objects according to the embodiment of the present invention.

Extracting a target object from an image is, in essence, detecting and then segmenting the object's contour; the whole extraction process comes down to exposing the differences between successive frames.
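The consecutive-frame differencing idea described above can be sketched as follows. Note that this is a simplified stand-in: the patent attributes key-frame extraction to an OpenCV edge-detection algorithm, whereas this sketch simply keeps a frame whenever it differs enough from the last kept frame:

```python
import numpy as np

def extract_keyframes(frames, threshold=10.0):
    """Pick key-frame indices by consecutive-frame differencing.

    A frame is kept when its mean absolute difference from the previously
    kept frame exceeds `threshold`. `frames` is a list of equally sized
    grayscale arrays (H x W). This is an illustrative simplification of the
    OpenCV-based extraction named in the text.
    """
    if not frames:
        return []
    keep = [0]  # the first frame is always a key frame
    for i in range(1, len(frames)):
        diff = np.abs(frames[i].astype(np.int16) -
                      frames[keep[-1]].astype(np.int16))
        if diff.mean() > threshold:
            keep.append(i)
    return keep
```

In a full pipeline the frames would come from a decoded video stream, and the kept frames would be saved for the screening step that follows.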
Step S210: screen the video key frames using the Random Forest algorithm.
The Random Forest algorithm screens and cleans the key frames. Random Forest is a machine-learning classifier containing multiple decision trees; its output class is the mode of the classes output by the individual trees.
The basic flow of the random forest algorithm is as follows:

1) draw n samples from the sample set by random sampling with replacement;

2) randomly select k features from all features and use them to build a decision tree for the selected samples (generally CART, though other or mixed tree types are possible);

3) repeat the two steps above m times, generating m decision trees that form the random forest;

4) for new data, let each tree decide, and confirm by final vote which class the data is assigned to.
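The four-step flow above can be sketched in miniature. In this illustrative version, depth-1 "stumps" on a single random feature stand in for full CART trees; steps 1 (bootstrap sampling), 2 (random feature selection) and 4 (majority vote) follow the description above:

```python
import random
from collections import Counter

def train_stump(data, rng):
    """Train one depth-1 'tree' (a stump) on a bootstrap sample.

    data: list of (features_tuple, label). Real random forests grow full
    CART trees on k random features; a single-feature stump is used here
    purely to keep the sketch short.
    """
    sample = [rng.choice(data) for _ in range(len(data))]  # step 1: bootstrap
    feat = rng.randrange(len(sample[0][0]))                # step 2: random feature
    thresh = sum(x[feat] for x, _ in sample) / len(sample)
    above = [y for x, y in sample if x[feat] > thresh]
    below = [y for x, y in sample if x[feat] <= thresh]
    hi = Counter(above or below).most_common(1)[0][0]
    lo = Counter(below or above).most_common(1)[0][0]
    return lambda x: hi if x[feat] > thresh else lo

def random_forest_predict(forest, x):
    """Step 4: the class receiving the most votes across the m trees wins."""
    return Counter(tree(x) for tree in forest).most_common(1)[0][0]
```

Building the forest is step 3: `forest = [train_stump(data, rng) for _ in range(m)]`.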
Step S220: identify the screened video key frames with the detector SSD algorithm according to the image classification model, and determine the products contained in the screened video key frames.
The detector SSD identifies and classifies the key-frame pictures. SSD is a deep convolutional neural network object detection method based on regression; Fig. 7 is a schematic diagram of the SSD object detection method according to the embodiment of the present invention. As shown in Fig. 7, when the SSD network convolves the input image, it evaluates a small set of default boxes of different aspect ratios at each position of feature maps of different sizes, e.g. 8x8 or 4x4. For each default box, it predicts the shape offsets and the confidences for all object categories. During training, these default boxes are first matched to the ground-truth boxes; for example, two default boxes matched to the cat and the dog are treated as positives, and the rest are treated as negatives. The model loss is a weighted sum of the localisation loss and the confidence loss.
The SSD method is based on a feed-forward convolutional neural network that produces a fixed-size set of regional boxes and scores for the object categories within those boxes, and then applies a non-maximum suppression step to produce the final detections.
SSD combines the RPN scoring mechanism of Faster R-CNN with the regression idea of YOLO, regressing multi-scale regional features at each position of the whole image; it therefore not only retains fast detection speed but also greatly improves the precision of regional box prediction.
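The non-maximum suppression step that SSD uses to prune overlapping candidate boxes can be sketched as a generic greedy NMS. This is the standard formulation, not necessarily the patent's exact implementation:

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as [x1, y1, x2, y2]."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, iou_threshold=0.5):
    """Greedy non-maximum suppression.

    Repeatedly keep the highest-scoring remaining box and discard every
    box that overlaps it by at least `iou_threshold`; return the indices
    of the kept boxes.
    """
    order = np.argsort(scores)[::-1].tolist()  # highest score first
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order
                 if iou(boxes[best], boxes[i]) < iou_threshold]
    return keep
```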
Please refer to Fig. 8, which is a schematic structural diagram of an advertisement information adding device disclosed by an embodiment of the present invention. The device includes:
an image recognition module 10 for performing image recognition on a video file to generate an image recognition result, the image recognition result including the products contained in the video file and the time points at which each product appears in the video file;

a screening and classification module 11 for determining, according to each product's time points of appearance in the video file, the optimal display time of each product's advertisement information, and applying a graded classification label to each product, by mainstream-industry product category, in the scene corresponding to the optimal display time of that product's advertisement information;

an advertisement information placement module 12 for obtaining a publicity product selected by the user from the products contained in the video file and, when the video file plays to the scene where the publicity product is labelled, adding the advertisement information of the publicity product to that scene for display.
Optionally, the image recognition module is specifically configured to:

identify the products contained in the images of the video file.

Optionally, the device further includes:

an image classification model acquisition module for performing deep learning on an image data set using the Google Inception V3 algorithm, before the products contained in the images of the video file are identified, to obtain an image classification model.

Optionally, the image classification model acquisition module is specifically configured to:

improve the deep learning model based on the annotated Open Images image data set during the deep learning performed on the image data set with the Google Inception V3 algorithm.

Optionally, the image recognition module is specifically configured to:

extract and save the video key frames in the video file based on the edge detection algorithm of the open-source computer vision library OpenCV;

screen the video key frames using the Random Forest algorithm;

identify the screened video key frames with the detector SSD algorithm according to the image classification model, and determine the products contained in the screened video key frames.
In summary, the embodiments of the present invention disclose an advertisement information adding method and device. Image recognition is performed on a video file to generate an image recognition result, the result including the products contained in the video file and the time points at which each product appears in the video file. According to each product's time points of appearance, the optimal display time of each product's advertisement information is determined, and each product is given a graded classification label, by mainstream-industry product category, in the scene corresponding to that optimal display time. A publicity product selected by the user from the products contained in the video file is obtained, and when the video file plays to the scene where the publicity product is labelled, the advertisement information of the publicity product is added to that scene for display. Based on the above method and device, the application efficiency of video can be improved.
The embodiments in this specification are described progressively; each embodiment focuses on its differences from the other embodiments, and identical or similar parts of the embodiments may be cross-referenced. Since the device disclosed in an embodiment corresponds to the method disclosed in an embodiment, its description is relatively simple, and the relevant points may be found in the description of the method.
Those skilled in the art will further appreciate that the units and algorithm steps described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two. To illustrate the interchangeability of hardware and software clearly, the composition and steps of each example have been described above generally in terms of function. Whether these functions are implemented in hardware or software depends on the specific application and design constraints of the technical solution. Skilled practitioners may use different methods to implement the described functions for each specific application, but such implementations should not be considered beyond the scope of the present invention.
The steps of the method or algorithm described in connection with the embodiments disclosed herein may be implemented directly in hardware, in a software module executed by a processor, or in a combination of the two. The software module may reside in random access memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium well known in the technical field.
The foregoing description of the disclosed embodiments enables those skilled in the art to implement or use the present invention. Various modifications to these embodiments will be apparent to those skilled in the art, and the general principles defined herein can be realised in other embodiments without departing from the spirit or scope of the present invention. The present invention is therefore not to be limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (10)
1. An advertisement information adding method, characterised by comprising:

performing image recognition on a video file to generate an image recognition result, the image recognition result including the products contained in the video file and the time points at which each product appears in the video file;

determining, according to each product's time points of appearance in the video file, the optimal display time of each product's advertisement information, and applying a graded classification label to each product, by mainstream-industry product category, in the scene corresponding to the optimal display time of each product's advertisement information;

obtaining a publicity product selected by the user from the products contained in the video file, and, when the video file plays to the scene where the publicity product is labelled, adding the advertisement information of the publicity product to that scene for display.
2. The method according to claim 1, characterised in that performing image recognition on the video file to generate an image recognition result includes:

identifying the products contained in the images of the video file.
3. The method according to claim 2, characterised in that, before the products contained in the images of the video file are identified, the method further includes:

performing deep learning on an image data set using the Google Inception V3 algorithm to obtain an image classification model.
4. The method according to claim 3, characterised in that, during the deep learning performed on the image data set using the Google Inception V3 algorithm, the method further includes:

improving the deep learning model based on the annotated Open Images image data set.
5. The method according to claim 2, characterised in that identifying the products contained in the images of the video file specifically includes:

extracting and saving the video key frames in the video file based on the edge detection algorithm of the open-source computer vision library OpenCV;

screening the video key frames using the Random Forest algorithm;

identifying the screened video key frames with the detector SSD algorithm according to the image classification model, and determining the products contained in the screened video key frames.
6. An advertisement information adding device, characterised by comprising:

an image recognition module for performing image recognition on a video file to generate an image recognition result, the image recognition result including the products contained in the video file and the time points at which each product appears in the video file;

a screening and classification module for determining, according to each product's time points of appearance in the video file, the optimal display time of each product's advertisement information, and applying a graded classification label to each product, by mainstream-industry product category, in the scene corresponding to the optimal display time of each product's advertisement information;

an advertisement information placement module for obtaining a publicity product selected by the user from the products contained in the video file and, when the video file plays to the scene where the publicity product is labelled, adding the advertisement information of the publicity product to that scene for display.
7. The apparatus according to claim 6, wherein the image recognition module is specifically configured to:
identify the products contained in the images of the video file.
8. The apparatus according to claim 7, further comprising:
an image classification model acquisition module, configured to perform deep learning on an image dataset using
the Google Inception V3 algorithm to obtain an image classification model before the products contained in the
images of the video file are identified.
9. The apparatus according to claim 8, wherein the image classification model acquisition module is specifically
configured to:
refine the deep-learning model based on the annotated Open Images image dataset while performing deep learning on
the image dataset using the Google Inception V3 algorithm.
10. The apparatus according to claim 7, wherein the image recognition module is specifically configured to:
extract and save the video keyframes of the video file based on the edge detection algorithm of the open-source
computer vision library OpenCV;
screen the video keyframes using a random forest algorithm; and
recognize the screened video keyframes with the SSD detector algorithm according to the image classification model,
to determine the products contained in the screened video keyframes.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810200729.1A CN108471543A (en) | 2018-03-12 | 2018-03-12 | A kind of advertisement information adding method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108471543A true CN108471543A (en) | 2018-08-31 |
Family
ID=63265144
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810200729.1A Pending CN108471543A (en) | 2018-03-12 | 2018-03-12 | A kind of advertisement information adding method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108471543A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112040256A (en) * | 2020-08-14 | 2020-12-04 | 华中科技大学 | Live broadcast experiment teaching process video annotation method and system |
CN112040256B (en) * | 2020-08-14 | 2021-06-11 | 华中科技大学 | Live broadcast experiment teaching process video annotation method and system |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1728781A (en) * | 2004-07-30 | 2006-02-01 | 新加坡科技研究局 | Method and apparatus for insertion of additional content into video |
CN101790049A (en) * | 2010-02-25 | 2010-07-28 | 深圳市茁壮网络股份有限公司 | Newscast video segmentation method and system |
CN104715023A (en) * | 2015-03-02 | 2015-06-17 | 北京奇艺世纪科技有限公司 | Commodity recommendation method and system based on video content |
CN104811744A (en) * | 2015-04-27 | 2015-07-29 | 北京视博云科技有限公司 | Information putting method and system |
CN106127106A (en) * | 2016-06-13 | 2016-11-16 | 东软集团股份有限公司 | Target person lookup method and device in video |
CN106792004A (en) * | 2016-12-30 | 2017-05-31 | 北京小米移动软件有限公司 | Content item method for pushing, apparatus and system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107633204B (en) | Face occlusion detection method, apparatus and storage medium | |
Aguilar et al. | Grab, pay, and eat: Semantic food detection for smart restaurants | |
CN107993191B (en) | Image processing method and device | |
CN107808143B (en) | Dynamic gesture recognition method based on computer vision | |
TWI746674B (en) | Type prediction method, device and electronic equipment for identifying objects in images | |
CN107808120B (en) | Glasses localization method, device and storage medium | |
Koo et al. | Image recognition performance enhancements using image normalization | |
CN111027493B (en) | Pedestrian detection method based on deep learning multi-network soft fusion | |
EP3477549A1 (en) | Computer vision architecture with machine learned image recognition models | |
CN109145766B (en) | Model training method and device, recognition method, electronic device and storage medium | |
CN110956060A (en) | Motion recognition method, driving motion analysis method, device and electronic equipment | |
CN107679448A (en) | Eyeball action-analysing method, device and storage medium | |
US9575566B2 (en) | Technologies for robust two-dimensional gesture recognition | |
CN109657537A (en) | Image-recognizing method, system and electronic equipment based on target detection | |
CN110147483A (en) | A kind of title method for reconstructing and device | |
CN110222582B (en) | Image processing method and camera | |
CN106897659A (en) | The recognition methods of blink motion and device | |
CN107862322B (en) | Method, device and system for classifying picture attributes by combining picture and text | |
CN110097616B (en) | Combined drawing method and device, terminal equipment and readable storage medium | |
CN107633205A (en) | lip motion analysis method, device and storage medium | |
CN104077597B (en) | Image classification method and device | |
CN110298380A (en) | Image processing method, device and electronic equipment | |
CN109034012A (en) | First person gesture identification method based on dynamic image and video sequence | |
CN109902541A (en) | A kind of method and system of image recognition | |
WO2019142127A1 (en) | Method and system of creating multiple expression emoticons |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 2018-08-31 |