CN107665261A - Video duplicate checking method and device - Google Patents
Video duplicate checking method and device
- Publication number
- CN107665261A CN107665261A CN201711008924.6A CN201711008924A CN107665261A CN 107665261 A CN107665261 A CN 107665261A CN 201711008924 A CN201711008924 A CN 201711008924A CN 107665261 A CN107665261 A CN 107665261A
- Authority
- CN
- China
- Prior art keywords
- video
- key frame
- checked
- frequency
- processing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
- G06F16/78—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/783—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
Abstract
Embodiments of the invention provide a video duplicate checking method. The method includes: extracting key frames of a video to be checked according to a preset rule; inputting each key frame into a preset feature extraction model to obtain the deep feature corresponding to each key frame; performing image feature pooling on the deep feature corresponding to each key frame to obtain the pooled deep feature corresponding to each key frame; integrating and encoding the pooled deep features of the key frames to obtain feature information of the video to be checked; post-processing the feature information of the video to be checked by at least one of the following processing modes to obtain post-processed feature information, the processing modes including feature dimension reduction and decorrelation; and performing video duplicate checking according to the post-processed feature information of the video to be checked. The video duplicate checking method and device provided by the embodiments of the present invention are suitable for checking videos for duplicates.
Description
Technical field
The present invention relates to the field of multimedia technology, and in particular to a video duplicate checking method and device.
Background technology
With the development of information technology, multimedia technology has developed as well, and various types of video websites have emerged. Some users or site administrators often upload videos to these websites for other users to download and view. A website therefore receives a large number of uploaded videos, many of which are duplicates of or highly similar to one another. When the website ranks videos by view count or recommends them to users, the presence of many duplicate or highly similar videos lowers the accuracy of both the ranking and the recommendations. The duplicates also make it harder for users to find and watch the videos they want, degrading the user experience.
Summary of the invention
To overcome the above technical problems, or at least partially solve them, the following technical solutions are proposed:
According to one aspect, embodiments of the invention provide a video duplicate checking method, including:
extracting key frames of a video to be checked according to a preset rule;
inputting each key frame into a preset feature extraction model to obtain the deep feature corresponding to each key frame;
performing image feature pooling on the deep feature corresponding to each key frame to obtain the pooled deep feature corresponding to each key frame;
integrating and encoding the pooled deep feature corresponding to each key frame to obtain feature information of the video to be checked;
post-processing the feature information of the video to be checked by at least one of the following processing modes to obtain post-processed feature information, the processing modes including feature dimension reduction and decorrelation; and
performing video duplicate checking according to the post-processed feature information of the video to be checked.
The preset feature extraction model is obtained by training a deep convolutional neural network.
Further, before the step of inputting each key frame into the preset feature extraction model to obtain the deep feature corresponding to each key frame, the method also includes:
performing image preprocessing on each key frame, the image preprocessing including at least one of the following: size normalization and image whitening;
wherein the step of inputting each key frame into the preset feature extraction model to obtain the deep feature corresponding to each key frame includes:
inputting each preprocessed key frame into the preset feature extraction model to obtain the deep feature corresponding to each key frame.
Specifically, the step of performing video duplicate checking according to the post-processed feature information of the video to be checked includes:
determining a video feature index of the video to be checked from the post-processed feature information by product quantization; and
performing video duplicate checking according to the video feature index of the video to be checked.
Specifically, the video duplicate checking includes:
judging whether any videos have identical video feature indexes; and
if identical video feature indexes exist, determining that the videos corresponding to the identical video feature indexes are duplicates.
Further, the method also includes: determining, from the duplicate videos, a video to be deleted, and deleting the video to be deleted.
According to another aspect, embodiments of the invention further provide a video duplicate checking device, including:
an extraction module, configured to extract key frames of a video to be checked according to a preset rule;
an input module, configured to input each key frame extracted by the extraction module into a preset feature extraction model to obtain the deep feature corresponding to each key frame;
an image feature pooling module, configured to perform image feature pooling on the deep feature corresponding to each key frame to obtain the pooled deep feature corresponding to each key frame;
an integration and encoding module, configured to integrate and encode the pooled deep feature corresponding to each key frame to obtain feature information of the video to be checked;
a post-processing module, configured to post-process the feature information of the video to be checked by at least one of the following processing modes to obtain post-processed feature information, the processing modes including feature dimension reduction and decorrelation; and
a video duplicate checking module, configured to perform video duplicate checking according to the post-processed feature information produced by the post-processing module.
The preset feature extraction model is obtained by training a deep convolutional neural network.
Further, the device also includes an image preprocessing module;
the image preprocessing module is configured to perform image preprocessing on each key frame, the image preprocessing including at least one of the following: size normalization and image whitening; and
the input module is specifically configured to input each preprocessed key frame into the preset feature extraction model to obtain the deep feature corresponding to each key frame.
Specifically, the video duplicate checking module includes a determining unit and a video duplicate checking unit;
the determining unit is configured to determine a video feature index of the video to be checked from the post-processed feature information by product quantization; and
the video duplicate checking unit is configured to perform video duplicate checking according to the video feature index determined by the determining unit.
Specifically, the video duplicate checking module is configured to judge whether any videos have identical video feature indexes; and
the video duplicate checking module is further configured to determine, when identical video feature indexes exist, that the videos corresponding to the identical indexes are duplicates.
Further, the device also includes a determining module and a removing module;
the determining module is configured to determine, from the duplicate videos, a video to be deleted; and
the removing module is configured to delete the video to be deleted.
According to another aspect, embodiments of the invention further provide a computer-readable storage medium storing a computer program which, when executed by a processor, implements the above video duplicate checking method.
According to yet another aspect, embodiments of the invention further provide a computing device, including a processor, a memory, a communication interface and a communication bus, through which the processor, the memory and the communication interface communicate with one another; the memory stores at least one executable instruction which causes the computing device to perform the operations corresponding to the above video duplicate checking method.
The invention provides a video duplicate checking method and device. Key frames of a video to be checked are extracted according to a preset rule; each key frame is input into a preset feature extraction model to obtain the deep feature corresponding to each key frame; image feature pooling is performed on the deep features; the pooled deep features are integrated and encoded to obtain feature information of the video to be checked; the feature information is post-processed by at least one of feature dimension reduction and decorrelation; and video duplicate checking is performed according to the post-processed feature information. By checking videos for duplicates, for example the videos already uploaded, the duplicate or highly similar videos among them can be identified, which improves the accuracy of the website's video ranking. Because the uploaded videos have been checked, the proportion of duplicate or highly similar videos is reduced, so that users searching for a video can find the required video more accurately, improving the user experience.
Additional aspects and advantages of the present invention will be set forth in part in the following description, and will in part become apparent from the description or be learned by practice of the invention.
Brief description of the drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily understood from the following description of the embodiments taken with the accompanying drawings, in which:
Fig. 1 is a flowchart of a video duplicate checking method according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of three pooling modes according to an embodiment of the present invention;
Fig. 3 is a schematic structural diagram of a video duplicate checking device according to an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of another video duplicate checking device according to an embodiment of the present invention.
Detailed description of the embodiments
Embodiments of the invention are described in detail below, and examples of the embodiments are shown in the drawings, in which the same or similar reference numerals throughout denote the same or similar elements or elements having the same or similar functions. The embodiments described below with reference to the drawings are exemplary, are intended only to explain the present invention, and are not to be construed as limiting the claims.
Those skilled in the art will appreciate that, unless expressly stated otherwise, the singular forms "a", "an", "said" and "the" used herein may also include the plural forms. It should be further understood that the word "comprising" used in the specification of the present invention refers to the presence of the stated features, integers, steps, operations, elements and/or components, but does not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or groups thereof. It should be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may also be present. In addition, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. The term "and/or" as used herein includes all or any units and all combinations of one or more of the associated listed items.
Those skilled in the art will appreciate that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It should also be understood that terms such as those defined in general dictionaries should be understood to have meanings consistent with their meanings in the context of the prior art, and, unless specifically so defined herein, will not be interpreted in an idealized or overly formal sense.
Those skilled in the art will appreciate that the terms "terminal" and "terminal device" as used herein include both devices with only a wireless signal receiver and no transmitting capability, and devices with receiving and transmitting hardware capable of two-way communication over a bidirectional communication link. Such a device may include: a cellular or other communication device, with or without a single-line or multi-line display; a PCS (Personal Communications Service) device, which may combine voice, data processing, fax and/or data communication capabilities; a PDA (Personal Digital Assistant), which may include a radio-frequency receiver, a pager, Internet/intranet access, a web browser, a notepad, a calendar and/or a GPS (Global Positioning System) receiver; and a conventional laptop and/or palmtop computer or other device that has and/or includes a radio-frequency receiver. The "terminal" or "terminal device" as used herein may be portable, transportable, installed in a vehicle (air, sea and/or land), or adapted and/or configured to operate locally and/or to operate in a distributed fashion at any location on the earth and/or in space. The "terminal" or "terminal device" as used herein may also be a communication terminal, an Internet terminal or a music/video playback terminal, for example a PDA, a MID (Mobile Internet Device) and/or a mobile phone, or a device with a music/video playback function such as a smart television or a set-top box.
Embodiment one
The invention provides a video duplicate checking method, which, as shown in Fig. 1, includes:
Step 101: extracting key frames of a video to be checked according to a preset rule.
In the embodiment of the present invention, the video to be checked is segmented into multiple image frames, and one image frame is extracted as a key frame at every preset interval.
For example, one image frame is extracted from the video to be checked every 2 seconds as a key frame of the video.
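The fixed-interval rule above can be sketched as a small helper that maps a sampling interval in seconds to frame indices; the function name and the 2-second default are illustrative, not taken from the patent:

```python
def key_frame_indices(total_frames, fps, interval_s=2.0):
    """Indices of the frames sampled every `interval_s` seconds."""
    step = max(1, int(round(fps * interval_s)))  # frames between key frames
    return list(range(0, total_frames, step))

# e.g. a 10-second clip at 25 fps sampled every 2 seconds
print(key_frame_indices(250, 25))  # -> [0, 50, 100, 150, 200]
```

In practice the frames themselves would be read at these indices with a video decoder.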
In the embodiment of the present invention, the preset rule for extracting key frames from the video to be checked may be: acquire the video frame images and extract a multidimensional feature from each image; perform an initial fuzzy C-means clustering on the images, taking the sample with the largest membership degree as the cluster centre; on this basis select neighbour samples that are of the same class as each cluster centre and samples that are of different classes; compute the weight of each feature dimension with the ReliefF algorithm; weight each feature dimension and perform a second clustering to obtain the final clustering result; and select the frame closest to each cluster centre as a key frame.
Further, if N video frame images are obtained in total, the sample set to be clustered is denoted S = {X1, X2, ..., XN}, and an M-dimensional feature (x1, x2, ..., xM) is extracted from each image Xj (j = 1, 2, ..., N). The initial fuzzy C-means clustering of the images then proceeds as follows:
Step 1: initialize the number of classes K from the number of video frames at a ratio of 20:1, manually set the weighting exponent m, randomly initialize the membership matrix, and determine the membership degree μij of the j-th image sample to the i-th class, where the sum of the membership degrees of each sample over all classes should be 1; L denotes the iteration count.
Step 2: compute the cluster centre of each class, Zi = Σj μij^m · Xj / Σj μij^m (i = 1, 2, ..., K).
Step 3: compute the new membership matrix μij(L+1) = 1 / Σk [d(Xj, Zi) / d(Xj, Zk)]^(2/(m−1)), where d(Xj, Zi) denotes the Euclidean distance between the image sample Xj and the cluster centre Zi at the end of the previous iteration.
Step 4: if |μij(L+1) − μij(L)| ≤ ε, the algorithm has converged and the criterion function J = Σi Σj μij^m · d²(Xj, Zi) reaches its minimum, so clustering stops; otherwise go to step 2. Here ε is a manually specified threshold parameter.
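Steps 1 to 4 can be sketched with NumPy as follows; the random membership initialization, the centre update of step 2, the membership update of step 3 and the convergence test of step 4 follow the standard fuzzy C-means form, and the function and parameter names are illustrative:

```python
import numpy as np

def fuzzy_c_means(X, K, m=2.0, eps=1e-4, max_iter=100, seed=0):
    """Fuzzy C-means per steps 1-4: random membership init, then
    alternate centre/membership updates until |mu(L+1) - mu(L)| <= eps."""
    rng = np.random.default_rng(seed)
    N = X.shape[0]
    U = rng.random((N, K))
    U /= U.sum(axis=1, keepdims=True)            # each sample's memberships sum to 1
    for _ in range(max_iter):
        Um = U ** m
        Z = (Um.T @ X) / Um.sum(axis=0)[:, None]  # step 2: cluster centres
        d = np.linalg.norm(X[:, None, :] - Z[None, :, :], axis=2) + 1e-12
        U_new = 1.0 / (d ** (2.0 / (m - 1.0)))
        U_new /= U_new.sum(axis=1, keepdims=True)  # step 3: new membership matrix
        converged = np.abs(U_new - U).max() <= eps  # step 4: convergence test
        U = U_new
        if converged:
            break
    return U, Z
```

With two well-separated groups of frames, the memberships split the samples into the two corresponding classes.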
Further, after the initial fuzzy C-means clustering of the images, the clustering result is obtained, and the sample Xr with the largest membership degree is selected as the centre of each class. For each class-centre sample, q neighbour samples of the same class and q neighbour samples of each different class are found. For a feature dimension A, the weight is updated according to the ReliefF weight update formula
W(A) = W(A) − Σj diff(A, Xr, Hj) / (q·l) + Σ_{C ≠ class(Xr)} [P(C) / (1 − P(class(Xr)))] · Σj diff(A, Xr, Mj(C)) / (q·l),
where diff(A, R1, R2) denotes the difference between two samples R1 and R2 on feature A, Hj denotes the j-th neighbour sample of the same class, Mj(C) denotes the j-th neighbour sample in a different class C, and P(C) denotes the probability of class C occurring, C being a class different from that of Xr, computed as the ratio of the number of samples of class C to the total number of samples; similarly, P(class(Xr)) is the ratio of the number of samples of the same class as Xr to the total number of samples. The update is repeated l times, yielding a weight for every feature dimension in the feature set.
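A minimal sketch of the ReliefF weight update described above, assuming a range-normalised `diff` for numeric features and nearest neighbours chosen within each class; the random sampling of the l reference samples and all names are illustrative:

```python
import numpy as np

def relieff_weights(X, y, q=3, l=10, seed=0):
    """One ReliefF pass: for l reference samples X_r, subtract near-hit
    differences and add class-prior-weighted near-miss differences on each
    feature A. diff(A, R1, R2) is |R1[A] - R2[A]| normalised by the range of A."""
    rng = np.random.default_rng(seed)
    N, M = X.shape
    span = X.max(axis=0) - X.min(axis=0)
    span[span == 0] = 1.0
    classes, counts = np.unique(y, return_counts=True)
    prior = dict(zip(classes, counts / N))
    W = np.zeros(M)
    for _ in range(l):
        r = rng.integers(N)
        xr, cr = X[r], y[r]
        for c in classes:
            idx = np.where((y == c) & (np.arange(N) != r))[0]
            near = idx[np.argsort(np.linalg.norm(X[idx] - xr, axis=1))[:q]]
            diff = np.abs(X[near] - xr) / span
            if c == cr:                               # q near-hits H_j
                W -= diff.sum(axis=0) / (q * l)
            else:                                     # q near-misses M_j(C)
                W += (prior[c] / (1 - prior[cr])) * diff.sum(axis=0) / (q * l)
    return W
```

On data where one feature separates the classes and another is noise, the separating feature receives the larger weight.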
Further, each feature dimension is weighted with the obtained weights, and a second clustering is performed according to the initial fuzzy C-means clustering method above, this time computing the distance between samples with the weighted Euclidean distance d(Xj, Zi) = sqrt(Σk λk · (xk − zk)²), where xk denotes the k-th attribute feature value of the sample Xj, zk denotes the k-th attribute feature value of the centre Zi, and λk denotes the weight of the k-th feature attribute.
Further, according to the result of the second clustering, the image corresponding to the sample with the largest membership degree in each class is selected as the cluster centre and taken as a key frame.
Step 102: inputting each key frame into a preset feature extraction model to obtain the deep feature corresponding to each key frame.
The preset feature extraction model is obtained by training a deep convolutional neural network.
For example, the deep convolutional neural network is trained on 20 million material images covering 21,000 categories to obtain the preset feature extraction model.
In the embodiment of the present invention, each key frame is input into the trained deep convolutional neural network to obtain, for each key frame, the probability of the frame belonging to each of the 21,000 categories; alternatively, a representation of a preset dimension is output for the key frame, which can be used to characterize the application scenario of the frame image, for example indoor, outdoor, the sun, the sky, and so on.
Step 103: performing image feature pooling on the deep feature corresponding to each key frame to obtain the pooled deep feature corresponding to each key frame.
In the embodiment of the present invention, pooling is performed on the basis of the convolutional feature extraction: the convolutional features are averaged so as to keep shrinking the convolutional feature dimensionality and the number of hidden nodes.
In the embodiment of the present invention, a feature that is useful in one image region is very likely to be equally applicable in another region. Therefore, to describe a large image, a natural idea is to compute aggregate statistics of the features at different locations; for example, one can compute the average value (or the maximum value) of a particular feature over a region of the image. These summary statistics not only have a much lower dimension (compared with using all the extracted features) but also improve the results (they are less prone to overfitting). This aggregation operation is called pooling.
In the embodiment of the present invention, the pooling may include: 1) mean-pooling, in which the feature points in a neighbourhood are averaged, which preserves the background better; 2) max-pooling, in which the maximum of the feature points in a neighbourhood is taken, which preserves texture better; and 3) stochastic-pooling, which lies between the two: each pixel is assigned a probability according to its numerical value, and sub-sampling is then performed according to these probabilities.
The error of the feature extraction mainly comes from two sources: (1) the increase of the estimation variance caused by the limited neighbourhood size; and (2) the offset of the estimated mean caused by the convolutional-layer parameter error. In general, mean-pooling reduces the first kind of error and preserves more background information of the image, while max-pooling reduces the second kind of error and preserves more texture information. Stochastic-pooling is close to mean-pooling on average, and obeys the max-pooling criterion in the local sense. The three pooling modes are shown in Fig. 2.
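The three pooling modes can be sketched over non-overlapping k x k windows as follows; the stochastic branch assumes non-negative activations (as is typical after a ReLU), and the function name and defaults are illustrative:

```python
import numpy as np

def pool2d(x, k=2, mode="max", rng=None):
    """Pool a 2-D feature map over non-overlapping k x k windows.
    mode: 'mean' keeps background, 'max' keeps texture, 'stochastic'
    samples one value per window with value-proportional probability."""
    h, w = x.shape
    x = x[:h - h % k, :w - w % k].reshape(h // k, k, w // k, k)
    windows = x.transpose(0, 2, 1, 3).reshape(h // k, w // k, k * k)
    if mode == "mean":
        return windows.mean(axis=-1)
    if mode == "max":
        return windows.max(axis=-1)
    # stochastic: sampling probability proportional to each value's magnitude
    rng = rng or np.random.default_rng(0)
    p = windows / np.clip(windows.sum(axis=-1, keepdims=True), 1e-12, None)
    idx = np.array([[rng.choice(k * k, p=p[i, j]) for j in range(p.shape[1])]
                    for i in range(p.shape[0])])
    return np.take_along_axis(windows, idx[..., None], -1)[..., 0]
```

On the same window, mean-pooling returns the average while max-pooling returns the largest activation, matching the trade-off described above.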
Step 104: integrating and encoding the pooled deep feature corresponding to each key frame to obtain the feature information of the video to be checked.
For example, three key frames, key frame 1, key frame 2 and key frame 3, are extracted from the video to be checked in step 101; the feature information of the video to be checked is then determined from the feature information corresponding to key frame 1, key frame 2 and key frame 3 respectively.
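The patent does not fix the integration and encoding scheme; as one simple assumption, the per-key-frame features can be averaged and L2-normalised into a single video-level descriptor:

```python
import numpy as np

def video_feature(frame_feats):
    """Integrate per-key-frame deep features into one video-level vector.
    Assumption for this sketch: mean over frames, then L2 normalisation."""
    v = np.mean(np.stack(frame_feats), axis=0)
    return v / np.linalg.norm(v)
```

More elaborate encodings (e.g. concatenation or learned aggregation) would fit the same slot in the pipeline.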
Step 105: post-processing the feature information of the video to be checked by at least one of the following processing modes to obtain the post-processed feature information of the video to be checked.
The processing modes include feature dimension reduction and decorrelation.
In the embodiment of the present invention, the feature information of the video to be checked is subjected to feature dimension reduction by a preset dimension-reduction algorithm. The algorithm includes: principal component analysis (PCA), factor analysis and independent component analysis (ICA). In the embodiment of the present invention, taking PCA as an example, dimension reduction is performed on the feature information of the video to be checked. PCA is an effective method, based on the covariance matrix of the variables, for compressing, reducing the dimensionality of and denoising data. The idea of PCA is to map n-dimensional features onto k dimensions (k < n); these k-dimensional features, called principal components, are linear combinations of the original features that maximize the sample variance while keeping the k new features as mutually orthogonal as possible.
For example, a 10,000-dimensional feature information matrix can be reduced to 400 dimensions by the feature dimension reduction.
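A minimal PCA sketch matching the description above: centre the features, take the k eigenvectors of the covariance matrix with the largest eigenvalues, and project. Because the principal components are orthogonal, the projected dimensions are also decorrelated, which connects to the decorrelation processing of step 105; the function name is illustrative:

```python
import numpy as np

def pca_reduce(X, k):
    """Project n-dim features onto the k principal components that
    maximise sample variance; the output dimensions are decorrelated."""
    Xc = X - X.mean(axis=0)                    # zero-mean per dimension
    cov = Xc.T @ Xc / (len(X) - 1)             # sample covariance matrix
    vals, vecs = np.linalg.eigh(cov)           # eigenvalues in ascending order
    top = vecs[:, np.argsort(vals)[::-1][:k]]  # k largest-variance directions
    return Xc @ top
```

The covariance of the reduced features is diagonal, with variances in decreasing order.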
In the embodiment of the present invention, correlation exists between image features of adjacent dimensions; when the correlation between the image features of adjacent dimensions is not needed, decorrelation is performed on the feature information of the video to be checked. In the embodiment of the present invention, by subjecting the feature information of the video to be checked to both feature dimension reduction and decorrelation, feature information of lower dimension and with less interference is obtained.
Step 106: performing video duplicate checking according to the post-processed feature information of the video to be checked.
In the embodiment of the present invention, the feature information of the video to be checked is used to determine whether any of the videos already online have feature information with a high degree of association with that of the video to be checked, thereby realizing the video duplicate checking.
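One hedged way to realise the degree-of-association test is cosine similarity between post-processed feature vectors; the threshold value and the dictionary-shaped library are illustrative assumptions, not part of the patent:

```python
import numpy as np

def find_duplicates(query_feat, library, threshold=0.95):
    """Return (video_id, similarity) for library videos whose feature
    vectors have cosine similarity with the query above `threshold`."""
    q = query_feat / np.linalg.norm(query_feat)
    hits = []
    for vid, feat in library.items():
        sim = float(q @ (feat / np.linalg.norm(feat)))
        if sim >= threshold:
            hits.append((vid, sim))
    return sorted(hits, key=lambda t: -t[1])   # most similar first
```

Videos whose similarity exceeds the threshold are reported as duplicate candidates, most similar first.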
The embodiment of the present invention provides a video duplicate checking method. Key frames of a video to be checked are extracted according to a preset rule; each key frame is input into a preset feature extraction model to obtain the deep feature corresponding to each key frame; image feature pooling is performed on the deep features; the pooled deep features are integrated and encoded to obtain feature information of the video to be checked; the feature information is post-processed by at least one of feature dimension reduction and decorrelation; and video duplicate checking is performed according to the post-processed feature information. By checking videos for duplicates, for example the videos already uploaded, the embodiment of the present invention can identify the duplicate or highly similar videos among them, improving the accuracy of the website's video ranking. Because the uploaded videos have been checked, the proportion of duplicate or highly similar videos is reduced, so that when users search for videos they can find the required videos more accurately, improving the user experience.
Embodiment two
In another possible implementation of the embodiment of the present invention, on the basis of embodiment one, the operations of embodiment two are further included, in which:
Before step 102 the method also includes: performing image preprocessing on each key frame, the image preprocessing including at least one of the following: size normalization and image whitening.
In the embodiment of the present invention, each key frame is subjected to size normalization and image whitening to improve the robustness of the key frame images.
In the embodiment of the present invention, the size normalization is performed on an image by sampling; for example, five patches are cropped from the image, one from the centre and one from each corner.
For the embodiment of the present invention, the final appearance of an image is affected by many factors, such as ambient illumination intensity, object reflectance, and the camera used for shooting. To retain in the image only the information that is invariant to such external factors, image whitening is applied to the image. To remove the influence of these factors, the pixel values are typically transformed to have zero mean and unit variance. Therefore, the pixel mean μ and variance δ² of the original grayscale image P are first computed by formula one and formula two.
Wherein formula one is: μ = (1/N) Σ_i P_i
Formula two is: δ² = (1/N) Σ_i (P_i − μ)²
where P_i denotes the i-th pixel value and N the number of pixels.
Then each pixel value of the original grayscale image is transformed using μ and δ. For a color image, μ and δ² are computed separately for each of the three channels, and the pixels of each channel are transformed according to formula three.
Wherein formula three is: P_i' = (P_i − μ) / δ
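A minimal sketch of formulas one to three, assuming the per-channel statistics described above (the small epsilon guarding against a zero variance is an addition for numerical safety, not part of the embodiment):

```python
import numpy as np

def whiten_image(image):
    """Transform pixel values to zero mean and unit variance (formulas one to three).

    A grayscale image uses one set of statistics; a color image is whitened
    per channel, as described for formula three.
    """
    img = np.asarray(image, dtype=np.float64)
    squeeze = img.ndim == 2
    if squeeze:                  # treat a grayscale image as a single channel
        img = img[..., np.newaxis]
    out = np.empty_like(img)
    for c in range(img.shape[-1]):
        mu = img[..., c].mean()               # formula one: pixel mean
        delta = np.sqrt(img[..., c].var())    # formula two: variance, then std
        out[..., c] = (img[..., c] - mu) / max(delta, 1e-8)  # formula three
    return out[..., 0] if squeeze else out
```

After this transform, each channel of the key frame has zero mean and (for non-constant channels) unit variance.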
Step 102 specifically includes: inputting each image-preprocessed key frame into the preset feature extraction model to obtain the depth feature corresponding to each key frame.
Embodiment three
In another possible implementation of the embodiment of the present invention, on the basis of embodiment one, the operations shown in embodiment three are also performed, wherein:
Step 106 includes: determining, according to the post-processed characteristic information of the video to be checked and by product quantization (Product Quantization), a video feature index of the video to be checked; and performing video duplicate checking according to the video feature index of the video to be checked.
For the embodiment of the present invention, product quantization (Product Quantization) consists of two processes: a grouped quantization of the features and a Cartesian product over the resulting groups. Suppose there is a data set. Given a number of classes K, k-means takes as its objective function the sum of distances from all samples to their class centers, iteratively optimizes this objective, and obtains the K class centers and the class of each sample. With the objective function unchanged, product quantization proceeds as follows:
(1) The data set is divided into K classes; each sample is represented as a vector of dimension d, and the components of each vector are divided into m groups.
(2) Taking one group of components of all vectors as a data set, k-means is used to obtain K^(1/m) class centers for that group; m runs of k-means are performed, so each group has K^(1/m) class centers, and the K^(1/m) class centers of a group are recorded as one set.
(3) Taking the Cartesian product of the m sets obtained above yields the class centers of the whole data set.
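Steps (1) to (3) can be sketched as follows; the tiny k-means used here and the choices of m and of the number of sub-centers are illustrative assumptions, not part of the embodiment:

```python
import numpy as np

def kmeans(data, k, iters=20, seed=0):
    """A minimal k-means: iteratively minimize the summed distance of samples
    to their class centers, returning the centers and per-sample labels."""
    rng = np.random.default_rng(seed)
    centers = data[rng.choice(len(data), size=k, replace=False)].astype(float)
    for _ in range(iters):
        dists = ((data[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = data[labels == j].mean(axis=0)
    return centers, labels

def product_quantize(X, m, k_sub):
    """Split each d-dimensional vector into m groups of components (step 1)
    and run k-means per group (step 2); the Cartesian product of the m
    codebooks implicitly defines the class centers of the whole data set
    (step 3), so each sample is encoded by m small codes."""
    n, d = X.shape
    assert d % m == 0, "dimension must divide into m equal groups"
    sub = d // m
    codebooks, codes = [], np.empty((n, m), dtype=int)
    for g in range(m):
        block = X[:, g * sub:(g + 1) * sub]   # one group of components
        centers, labels = kmeans(block, k_sub)
        codebooks.append(centers)
        codes[:, g] = labels
    return codebooks, codes
```

With k_sub centers per group, the implicit total number of class centers is k_sub^m, which is why a small per-group codebook suffices for a large overall one.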
For the embodiment of the present invention, the processed characteristic information of the videos to be checked is passed through product quantization (Product Quantization) to obtain the video feature index of the videos to be checked, where the video feature index is the correspondence between the videos to be checked and their feature index values.
For example, the videos to be checked include video 1, video 2 and video 3, whose identifiers are 001, 002 and 003 respectively, and whose video feature index values are 1, 2 and 1 respectively.
Further, the video duplicate checking includes: judging whether any of the video feature indexes corresponding to the respective videos are identical; if identical video feature indexes exist, determining that the videos corresponding to the identical video feature indexes are duplicates of one another.
For the embodiment of the present invention, if the video feature indexes corresponding to two videos are identical, the two videos are duplicate videos.
For example, the videos to be checked include video 1, video 2 and video 3, whose identifiers are 001, 002 and 003 respectively, and whose video feature index values are 1, 2 and 1 respectively. Since the video feature index values of video 1 and video 3 are both 1 (two different videos with identical feature index values), video 1 and video 3 are duplicate videos.
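The judgment above amounts to grouping videos by their feature index value; a sketch, with the video names and index values taken from the example:

```python
from collections import defaultdict

def find_duplicate_groups(index_by_video):
    """Group videos that share an identical video feature index; every group
    with more than one member consists of duplicate videos."""
    groups = defaultdict(list)
    for video, index_value in index_by_video.items():
        groups[index_value].append(video)
    return [videos for videos in groups.values() if len(videos) > 1]

# Using the example: video 1 and video 3 share index value 1.
duplicates = find_duplicate_groups({"video 1": 1, "video 2": 2, "video 3": 1})
# → [["video 1", "video 3"]]
```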
For the embodiment of the present invention, from the videos that are duplicates of one another, a video to be deleted is determined, and that video to be deleted is deleted.
For the embodiment of the present invention, if multiple duplicate videos exist among the videos already online, the video to be deleted is selected from these duplicate videos and deleted.
For the embodiment of the present invention, the video to be deleted is determined from the duplicate videos according to a preset principle, where the preset principle includes at least one of the following: the definition of the video, the release time of the video, the view count of the video, the click count of the video, and the download count of the video.
For example, the videos online include two duplicate videos, video 1 and video 3, where the download count of video 1 is 100 and the download count of video 3 is 1,200; the video to be deleted is then video 1.
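When the preset principle is the download count, selecting the video to be deleted reduces to taking the duplicate with the fewest downloads; a sketch under that single assumed principle:

```python
def pick_video_to_delete(duplicate_videos, download_counts):
    """Among a group of duplicate videos, pick the one with the lowest
    download count as the video to be deleted (one possible preset principle;
    the embodiment also allows definition, release time, views, or clicks)."""
    return min(duplicate_videos, key=lambda v: download_counts[v])

# Using the example: video 1 has 100 downloads, video 3 has 1,200.
to_delete = pick_video_to_delete(["video 1", "video 3"],
                                 {"video 1": 100, "video 3": 1200})
# → "video 1"
```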
For the embodiment of the present invention, by determining a video to be deleted from the duplicate videos and deleting it, when a user downloads a video from the videos online, the video to be downloaded can be determined and downloaded accurately; the duplication rate among the online videos is reduced, the accuracy of searching for the video to be downloaded is improved, and the user experience is enhanced.
For the embodiment of the present invention, determining a video to be deleted from the duplicate videos and deleting it also improves the accuracy of ranking the online videos; furthermore, when recommending videos to a user, videos can be recommended more accurately, which in turn improves the user experience.
For the embodiment of the present invention, if no duplicate checking is performed on videos, then when a user searches for a required video by entering search keywords, the website may recommend duplicate or highly similar videos to the user, or rank these unchecked videos before recommending them, so that the accuracy with which the website recommends videos to the user and ranks videos is low, and the user experience is poor. In the embodiment of the present invention, by determining a video to be deleted from the duplicate videos and deleting it, when a user searches for a required video by entering search keywords, the required video can be recommended to the user more accurately, or the ranking of related videos can be recommended to the user, thereby improving the user experience.
The embodiment of the present invention provides a video duplicate checking device. As shown in Figure 3, the device includes: an extraction module 31, an input module 32, an image feature pooling module 33, an integration and coding module 34, a post-processing module 35, and a video duplicate checking module 36, wherein:
The extraction module 31 is configured to extract the key frames of the video to be checked according to preset rules.
The input module 32 is configured to input each key frame extracted by the extraction module 31 into a preset feature extraction model, obtaining the depth feature corresponding to each key frame.
The preset feature extraction model is obtained by training a deep convolutional neural network.
The image feature pooling module 33 is configured to perform image feature pooling on the depth feature corresponding to each key frame, obtaining the pooled depth feature corresponding to each key frame.
The integration and coding module 34 is configured to integrate and encode the depth features corresponding to the key frames after the pooling performed by the image feature pooling module 33, obtaining the characteristic information of the video to be checked.
The post-processing module 35 is configured to post-process the characteristic information of the video to be checked by at least one of the following processing modes, obtaining the post-processed characteristic information of the video to be checked.
The processing modes include: feature dimensionality reduction; decorrelation processing.
The video duplicate checking module 36 is configured to perform video duplicate checking according to the characteristic information of the video to be checked post-processed by the post-processing module 35.
Further, as shown in Figure 4, the device also includes an image preprocessing module 41.
The image preprocessing module 41 is configured to perform image preprocessing on each key frame.
The image preprocessing includes at least one of the following: size normalization and image whitening.
The input module 32 is specifically configured to input each image-preprocessed key frame into the preset feature extraction model, obtaining the depth feature corresponding to each key frame.
Further, as shown in Figure 4, the video duplicate checking module 36 includes a determining unit 361 and a video duplicate checking unit 362.
The determining unit 361 is configured to determine, according to the post-processed characteristic information of the video to be checked and by product quantization (Product Quantization), the video feature index of the video to be checked.
The video duplicate checking unit 362 is configured to perform video duplicate checking according to the video feature index determined by the determining unit 361.
The video duplicate checking module 36 is specifically configured to judge whether any of the video feature indexes corresponding to the respective videos are identical.
The video duplicate checking module 36 is further configured to determine, when identical video feature indexes exist, that the videos corresponding to the identical video feature indexes are duplicates of one another.
Further, as shown in Figure 4, the device also includes a determining module 42 and a deleting module 43.
The determining module 42 is configured to determine, from the duplicate videos, a video to be deleted.
The deleting module 43 is configured to delete the video to be deleted.
The embodiment of the present invention provides a video duplicate checking device. The embodiment extracts the key frames of the video to be checked according to preset rules, then inputs each key frame into a preset feature extraction model to obtain the depth feature corresponding to each key frame, and performs image feature pooling on the depth feature corresponding to each key frame to obtain the pooled depth features. The pooled depth features corresponding to the key frames are then integrated and encoded to obtain the characteristic information of the video to be checked, which is post-processed by at least one of the following processing modes to obtain the post-processed characteristic information, the processing modes including: feature dimensionality reduction; decorrelation processing. Video duplicate checking is then performed according to the post-processed characteristic information of the video to be checked.
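The flow above can be sketched end to end; the concrete choices below (average pooling over each feature map, averaging over key frames as the integration step, mean-centering as a stand-in for post-processing, and the caller-supplied quantizer) are illustrative assumptions only:

```python
import numpy as np

def dedup_videos(keyframe_features, quantize):
    """`keyframe_features` maps a video id to a list of per-key-frame feature
    maps (H x W x C arrays); `quantize` maps a feature vector to an index
    value. Both are hypothetical stand-ins for the embodiment's components."""
    index_of = {}
    for vid, feats in keyframe_features.items():
        pooled = [f.mean(axis=(0, 1)) for f in feats]  # image feature pooling
        video_vec = np.mean(pooled, axis=0)            # integrate the key frames
        video_vec = video_vec - video_vec.mean()       # crude post-processing stand-in
        index_of[vid] = quantize(video_vec)
    groups = {}
    for vid, idx in index_of.items():                  # identical index => duplicates
        groups.setdefault(idx, []).append(vid)
    return [vids for vids in groups.values() if len(vids) > 1]
```

Two videos whose key-frame features collapse to the same index value land in the same group and are reported as duplicates.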
That is, by performing duplicate checking on videos, for example on the videos that have been uploaded, the embodiment of the present invention can identify duplicate videos or highly similar videos among the uploaded videos, thereby improving the accuracy with which the website ranks videos. Moreover, because duplicate checking is performed on the uploaded videos, the proportion of duplicate and highly similar videos is reduced, so that when a user searches for a video, the required video can be found more accurately, which in turn improves the user experience.
The video duplicate checking device provided by the embodiment of the present invention can implement the method embodiment provided above; for the specific function implementation, refer to the explanation in the method embodiment, which will not be repeated here.
The embodiment of the present invention provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the video duplicate checking method described above.
The embodiment of the present invention provides a computer-readable storage medium. The embodiment extracts the key frames of the video to be checked according to preset rules, inputs each key frame into a preset feature extraction model to obtain the depth feature corresponding to each key frame, performs image feature pooling on these depth features to obtain the pooled depth features, integrates and encodes the pooled depth features to obtain the characteristic information of the video to be checked, post-processes that characteristic information by at least one of the following processing modes to obtain the post-processed characteristic information, the processing modes including: feature dimensionality reduction; decorrelation processing, and then performs video duplicate checking according to the post-processed characteristic information. That is, by performing duplicate checking on videos, for example on the videos that have been uploaded, the embodiment can identify duplicate or highly similar videos among the uploaded videos, improving the accuracy with which the website ranks videos; and because duplicate checking is performed on the uploaded videos, the proportion of duplicate and highly similar videos is reduced, so that when a user searches for a video, the required video can be found more accurately, improving the user experience.
The computer-readable storage medium provided by the embodiment of the present invention can implement the method embodiment provided above; for the specific function implementation, refer to the explanation in the method embodiment, which will not be repeated here.
The embodiment of the present invention provides a computing device, including: a processor, a memory, a communication interface and a communication bus, the processor, the memory and the communication interface communicating with one another through the communication bus;
The memory is used to store at least one executable instruction, and the executable instruction causes the computing device to perform the operations corresponding to the video duplicate checking method described above.
The embodiment of the present invention provides a computing device. The embodiment extracts the key frames of the video to be checked according to preset rules, inputs each key frame into a preset feature extraction model to obtain the depth feature corresponding to each key frame, performs image feature pooling on these depth features to obtain the pooled depth features, integrates and encodes the pooled depth features to obtain the characteristic information of the video to be checked, post-processes that characteristic information by at least one of the following processing modes to obtain the post-processed characteristic information, the processing modes including: feature dimensionality reduction; decorrelation processing, and then performs video duplicate checking according to the post-processed characteristic information. That is, by performing duplicate checking on videos, for example on the videos that have been uploaded, the embodiment can identify duplicate or highly similar videos among the uploaded videos, improving the accuracy with which the website ranks videos; and because duplicate checking is performed on the uploaded videos, the proportion of duplicate and highly similar videos is reduced, so that when a user searches for a video, the required video can be found more accurately, improving the user experience.
The computing device provided by the embodiment of the present invention can implement the method embodiment provided above; for the specific function implementation, refer to the explanation in the method embodiment, which will not be repeated here.
Those skilled in the art will appreciate that the present invention includes devices for performing one or more of the operations described herein. These devices may be specially designed and manufactured for the required purposes, or may comprise known devices in general-purpose computers. These devices store computer programs that are selectively activated or reconfigured. Such computer programs may be stored in a device-readable (e.g., computer-readable) medium or in any type of medium suitable for storing electronic instructions and coupled to a bus, the computer-readable medium including but not limited to any type of disk (including floppy disks, hard disks, optical disks, CD-ROMs and magneto-optical disks), ROM (Read-Only Memory), RAM (Random Access Memory), EPROM (Erasable Programmable Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), flash memory, magnetic cards or optical cards. That is, a readable medium includes any medium that stores or transmits information in a form readable by a device (e.g., a computer).
Those skilled in the art will appreciate that each block of these structural diagrams and/or block diagrams and/or flow diagrams, and combinations of blocks therein, can be implemented by computer program instructions. Those skilled in the art will appreciate that these computer program instructions can be supplied to a processor of a general-purpose computer, a special-purpose computer, or another programmable data processing method, so that the schemes specified in one or more blocks of the structural diagrams and/or block diagrams and/or flow diagrams disclosed by the present invention are executed by the processor of the computer or of the other programmable data processing method.
Those skilled in the art will appreciate that the various operations, methods, and the steps, measures and schemes in the flows discussed in the present invention can be alternated, altered, combined or deleted. Further, other steps, measures and schemes in the various operations, methods and flows that have been discussed in the present invention can also be alternated, altered, rearranged, decomposed, combined or deleted. Further, steps, measures and schemes in the prior art corresponding to the various operations, methods and flows disclosed in the present invention can also be alternated, altered, rearranged, decomposed, combined or deleted.
The above describes only some embodiments of the present invention. It should be noted that, for those of ordinary skill in the art, several improvements and modifications can be made without departing from the principles of the present invention, and these improvements and modifications should also be regarded as falling within the protection scope of the present invention.
Claims (10)
- 1. A video duplicate checking method, characterized by comprising: extracting key frames of a video to be checked according to preset rules; inputting each key frame into a preset feature extraction model to obtain a depth feature corresponding to each key frame; performing image feature pooling on the depth feature corresponding to each key frame to obtain pooled depth features corresponding to the key frames; integrating and encoding the pooled depth features corresponding to the key frames to obtain characteristic information of the video to be checked; post-processing the characteristic information of the video to be checked by at least one of the following processing modes to obtain post-processed characteristic information, the processing modes comprising: feature dimensionality reduction; decorrelation processing; and performing video duplicate checking according to the post-processed characteristic information of the video to be checked.
- 2. The method according to claim 1, characterized in that the preset feature extraction model is obtained by training a deep convolutional neural network.
- 3. The method according to claim 1 or 2, characterized in that before the step of inputting each key frame into the preset feature extraction model to obtain the depth feature corresponding to each key frame, the method further comprises: performing image preprocessing on each key frame, the image preprocessing comprising at least one of the following: size normalization and image whitening; wherein the step of inputting each key frame into the preset feature extraction model to obtain the depth feature corresponding to each key frame comprises: inputting each image-preprocessed key frame into the preset feature extraction model to obtain the depth feature corresponding to each key frame.
- 4. The method according to claim 1, characterized in that the step of performing video duplicate checking according to the post-processed characteristic information of the video to be checked comprises: determining, according to the post-processed characteristic information of the video to be checked and by product quantization (Product Quantization), a video feature index of the video to be checked; and performing video duplicate checking according to the video feature index of the video to be checked.
- 5. The method according to claim 4, characterized in that the video duplicate checking comprises: judging whether any of the video feature indexes corresponding to the respective videos are identical; and if identical video feature indexes exist, determining that the videos corresponding to the identical video feature indexes are duplicates of one another.
- 6. The method according to claim 5, characterized by further comprising: determining, from the videos that are duplicates of one another, a video to be deleted, and deleting the video to be deleted.
- 7. A video duplicate checking device, characterized by comprising: an extraction module, configured to extract key frames of a video to be checked according to preset rules; an input module, configured to input each key frame extracted by the extraction module into a preset feature extraction model to obtain a depth feature corresponding to each key frame; an image feature pooling module, configured to perform image feature pooling on the depth feature corresponding to each key frame to obtain pooled depth features corresponding to the key frames; an integration and coding module, configured to integrate and encode the depth features pooled by the image feature pooling module to obtain characteristic information of the video to be checked; a post-processing module, configured to post-process the characteristic information of the video to be checked by at least one of the following processing modes to obtain post-processed characteristic information, the processing modes comprising: feature dimensionality reduction; decorrelation processing; and a video duplicate checking module, configured to perform video duplicate checking according to the characteristic information post-processed by the post-processing module.
- 8. The device according to claim 7, characterized in that the preset feature extraction model is obtained by training a deep convolutional neural network.
- 9. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, and when executed by a processor, the program implements the method according to any one of claims 1-6.
- 10. A computing device, comprising: a processor, a memory, a communication interface and a communication bus, the processor, the memory and the communication interface communicating with one another through the communication bus; the memory being used to store at least one executable instruction, the executable instruction causing the computing device to perform operations corresponding to the video duplicate checking method according to any one of claims 1-6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711008924.6A CN107665261B (en) | 2017-10-25 | 2017-10-25 | Video duplicate checking method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107665261A true CN107665261A (en) | 2018-02-06 |
CN107665261B CN107665261B (en) | 2021-06-18 |
Family
ID=61098062
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711008924.6A Active CN107665261B (en) | 2017-10-25 | 2017-10-25 | Video duplicate checking method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107665261B (en) |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109325294A (en) * | 2018-09-25 | 2019-02-12 | 云南电网有限责任公司电力科学研究院 | A kind of evidence characterization construction method of fired power generating unit air preheater performance state |
CN109684506A (en) * | 2018-11-22 | 2019-04-26 | 北京奇虎科技有限公司 | A kind of labeling processing method of video, device and calculate equipment |
CN110020093A (en) * | 2019-04-08 | 2019-07-16 | 深圳市网心科技有限公司 | Video retrieval method, edge device, video frequency searching device and storage medium |
CN110046279A (en) * | 2019-04-18 | 2019-07-23 | 网易传媒科技(北京)有限公司 | Prediction technique, medium, device and the calculating equipment of video file feature |
CN110163061A (en) * | 2018-11-14 | 2019-08-23 | 腾讯科技(深圳)有限公司 | For extracting the method, apparatus, equipment and computer-readable medium of video finger print |
WO2019184522A1 (en) * | 2018-03-29 | 2019-10-03 | 北京字节跳动网络技术有限公司 | Method and apparatus for determining duplicate video |
CN110727769A (en) * | 2018-06-29 | 2020-01-24 | 优视科技(中国)有限公司 | Corpus generation method and device, and man-machine interaction processing method and device |
CN110796088A (en) * | 2019-10-30 | 2020-02-14 | 行吟信息科技(上海)有限公司 | Video similarity determination method and device |
CN111241344A (en) * | 2020-01-14 | 2020-06-05 | 新华智云科技有限公司 | Video duplicate checking method, system, server and storage medium |
CN113065025A (en) * | 2021-03-31 | 2021-07-02 | 厦门美图之家科技有限公司 | Video duplicate checking method, device, equipment and storage medium |
CN113761282A (en) * | 2021-05-11 | 2021-12-07 | 腾讯科技(深圳)有限公司 | Video duplicate checking method and device, electronic equipment and storage medium |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130179420A1 (en) * | 2012-01-09 | 2013-07-11 | Brightedge Technologies, Inc. | Search engine optimization for category specific search results |
CN103530657A (en) * | 2013-09-26 | 2014-01-22 | 华南理工大学 | Deep learning human face identification method based on weighting L2 extraction |
CN104408469A (en) * | 2014-11-28 | 2015-03-11 | 武汉大学 | Firework identification method and firework identification system based on deep learning of image |
US20150269442A1 (en) * | 2014-03-18 | 2015-09-24 | Vivotek Inc. | Monitoring system and related image searching method |
CN105095902A (en) * | 2014-05-23 | 2015-11-25 | 华为技术有限公司 | Method and apparatus for extracting image features |
CN105138993A (en) * | 2015-08-31 | 2015-12-09 | 小米科技有限责任公司 | Method and device for building face recognition model |
CN106021575A (en) * | 2016-05-31 | 2016-10-12 | 北京奇艺世纪科技有限公司 | Retrieval method and device for same commodities in video |
CN106649663A (en) * | 2016-12-14 | 2017-05-10 | 大连理工大学 | Video copy detection method based on compact video representation |
CN107025267A (en) * | 2017-03-01 | 2017-08-08 | 国政通科技股份有限公司 | Based on the method and system for extracting Video Key logical message retrieval video |
Non-Patent Citations (1)
Title |
---|
Zhang Ting (张婷) et al.: "Self-embedding video authentication watermark based on ICA moving-object detection", Computer Engineering and Design (《计算机工程与设计》) * |
Cited By (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019184522A1 (en) * | 2018-03-29 | 2019-10-03 | 北京字节跳动网络技术有限公司 | Method and apparatus for determining duplicate video |
CN110324660A (en) * | 2018-03-29 | 2019-10-11 | 北京字节跳动网络技术有限公司 | A kind of judgment method and device repeating video |
US11265598B2 (en) | 2018-03-29 | 2022-03-01 | Beijing Bytedance Network Technology Co., Ltd. | Method and device for determining duplicate video |
CN110727769B (en) * | 2018-06-29 | 2024-04-19 | 阿里巴巴(中国)有限公司 | Corpus generation method and device and man-machine interaction processing method and device |
CN110727769A (en) * | 2018-06-29 | 2020-01-24 | 优视科技(中国)有限公司 | Corpus generation method and device, and man-machine interaction processing method and device |
CN109325294A (en) * | 2018-09-25 | 2019-02-12 | 云南电网有限责任公司电力科学研究院 | Evidence characterization construction method for performance state of air preheater of thermal power generating unit |
CN109325294B (en) * | 2018-09-25 | 2023-08-11 | 云南电网有限责任公司电力科学研究院 | Evidence characterization construction method for performance state of air preheater of thermal power generating unit |
CN110163061A (en) * | 2018-11-14 | 2019-08-23 | 腾讯科技(深圳)有限公司 | Method, apparatus, device and computer-readable medium for extracting video fingerprints |
CN109684506A (en) * | 2018-11-22 | 2019-04-26 | 北京奇虎科技有限公司 | Video tagging processing method and device and computing equipment |
CN109684506B (en) * | 2018-11-22 | 2023-10-20 | 三六零科技集团有限公司 | Video tagging processing method and device and computing equipment |
CN110020093A (en) * | 2019-04-08 | 2019-07-16 | 深圳市网心科技有限公司 | Video retrieval method, edge device, video retrieval apparatus and storage medium |
CN110046279B (en) * | 2019-04-18 | 2022-02-25 | 网易传媒科技(北京)有限公司 | Video file feature prediction method, medium, device and computing equipment |
CN110046279A (en) * | 2019-04-18 | 2019-07-23 | 网易传媒科技(北京)有限公司 | Method, medium, apparatus and computing equipment for predicting video file features |
CN110796088A (en) * | 2019-10-30 | 2020-02-14 | 行吟信息科技(上海)有限公司 | Video similarity determination method and device |
CN111241344A (en) * | 2020-01-14 | 2020-06-05 | 新华智云科技有限公司 | Video duplicate checking method, system, server and storage medium |
CN111241344B (en) * | 2020-01-14 | 2023-09-05 | 新华智云科技有限公司 | Video duplicate checking method, system, server and storage medium |
CN113065025A (en) * | 2021-03-31 | 2021-07-02 | 厦门美图之家科技有限公司 | Video duplicate checking method, device, equipment and storage medium |
CN113761282A (en) * | 2021-05-11 | 2021-12-07 | 腾讯科技(深圳)有限公司 | Video duplicate checking method and device, electronic equipment and storage medium |
CN113761282B (en) * | 2021-05-11 | 2023-07-25 | 腾讯科技(深圳)有限公司 | Video duplicate checking method and device, electronic equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN107665261B (en) | 2021-06-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107665261A (en) | Video duplicate checking method and device | |
CN107679250B (en) | Multi-task layered image retrieval method based on deep self-coding convolutional neural network | |
CN110866140B (en) | Image feature extraction model training method, image searching method and computer equipment | |
CN109783582B (en) | Knowledge base alignment method, device, computer equipment and storage medium | |
CN111680176B (en) | Remote sensing image retrieval method and system based on attention and bidirectional feature fusion | |
US8232996B2 (en) | Image learning, automatic annotation, retrieval method, and device | |
CN107506793B (en) | Garment identification method and system based on weakly labeled image | |
US9256617B2 (en) | Apparatus and method for performing visual search | |
CN110929080A (en) | Optical remote sensing image retrieval method based on attention and generation countermeasure network | |
CN107705805A (en) | Audio duplicate checking method and device | |
CN114358188A (en) | Feature extraction model processing method, feature extraction model processing device, sample retrieval method, sample retrieval device and computer equipment | |
CN107844541A (en) | Image duplicate checking method and device | |
Wang et al. | Aspect-ratio-preserving multi-patch image aesthetics score prediction | |
CN113837308B (en) | Knowledge distillation-based model training method and device and electronic equipment | |
CN111935487B (en) | Image compression method and system based on video stream detection | |
CN110069647B (en) | Image tag denoising method, device, equipment and computer readable storage medium | |
CN112488301A (en) | Food inversion method based on multitask learning and attention mechanism | |
CN112579816A (en) | Remote sensing image retrieval method and device, electronic equipment and storage medium | |
CN106708904A (en) | Image search method and apparatus | |
CN113761262B (en) | Image retrieval category determining method, system and image retrieval method | |
CN115908907A (en) | Hyperspectral remote sensing image classification method and system | |
CN107203585B (en) | Solanaceous image retrieval method and device based on deep learning | |
CN113032612B (en) | Construction method of multi-target image retrieval model, retrieval method and device | |
CN115600017A (en) | Feature coding model training method and device and media object recommendation method and device | |
Yin et al. | Animal image retrieval algorithms based on deep neural network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||