CN102222103A - Method and device for processing matching relationship of video content - Google Patents

Method and device for processing matching relationship of video content

Info

Publication number
CN102222103A
Authority
CN
China
Prior art keywords
video
video content
content
features
match
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN 201110169978
Other languages
Chinese (zh)
Other versions
CN102222103B (en)
Inventor
苗广艺
张名举
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CCTV INTERNATIONAL NETWORKS Co Ltd
Original Assignee
CCTV INTERNATIONAL NETWORKS Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CCTV INTERNATIONAL NETWORKS Co Ltd filed Critical CCTV INTERNATIONAL NETWORKS Co Ltd
Priority to CN 201110169978 priority Critical patent/CN102222103B/en
Publication of CN102222103A publication Critical patent/CN102222103A/en
Application granted granted Critical
Publication of CN102222103B publication Critical patent/CN102222103B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention discloses a method and a device for processing the matching relationship of video content. The method comprises the following steps: acquiring video content and determining the video type of the video content according to parameters of the video content; extracting video features of the video content according to the video type; and querying a video feature library, according to the video features, for matching video content corresponding to the video content, and generating a video association library, wherein, when matching video content is successfully found, the association between the successfully matched video content and the matching video content is saved to the video association library. By matching video content automatically, the method improves the efficiency of video search and saves labor cost.

Description

Method and device for processing the matching relationship of video content
Technical field
The present invention relates to the field of video, and in particular to a method and a device for processing the matching relationship of video content.
Background technology
With the rapid development of the media industry and the online video industry, video websites put a large number of new videos online every day. Besides videos uploaded by users and videos produced by the websites themselves, many of these videos come from media producers. The source material of a media producer is generally the live television signal; because television stations operate a great many channels, the number of videos a media producer makes each day is correspondingly huge, covering all kinds of television programs.
In general, the videos provided by media producers take two common forms: complete-program videos and program-fragment videos. A complete-program video is relatively long, such as a whole ball game or a whole news broadcast. A program-fragment video is usually a clip produced by editing and is relatively short, such as the shot of a soccer goal or a single news item. Both kinds come from the same video signal source; although they are different video files, in terms of content a program-fragment video is part of a complete-program video, and the corresponding time segment can be located on the complete-program video.
Therefore, an association exists between program-fragment videos and complete-program videos: each program fragment corresponds to a complete program and appears within a certain time segment of it. This association is extremely important. With this information, for a program-fragment video one can find the complete-program video that contains it and the time point at which it appears; for a complete-program video, one can find which program fragments it contains and the time point at which each fragment occurs. Moreover, this association is genuinely based on video content: it mines the content-level relationship between videos and is a kind of high-level association information. We call this novel association a "content-repeat association".
Existing video websites put large numbers of complete-program videos and program-fragment videos online every day, but no content-repeat association exists between them, because there has been no effective method for discovering such an association conveniently.
For the above problem, associations between online videos can be established through manually entered catalog information, that is, text entered by hand such as video titles, program-column names, and cast lists. This traditional kind of association therefore depends heavily on manual input and does not mine information at the level of video content. A manual workflow for matching complete-program videos against program-fragment videos would have to search through a huge number of videos and also locate the corresponding time point in each one; since the number of videos produced by media producers every day is enormous, the workload of a manual search is so large that the task can hardly be completed.
At present, the related art establishes associations between online videos manually, which makes the search work in the video-content matching process heavy and inefficient, and no effective solution to this problem has yet been proposed.
Summary of the invention
The present invention is proposed in view of the problem that the related art establishes associations between online videos manually, making the search work in the video-content matching process heavy and inefficient, and that no effective solution has yet been proposed. The main purpose of the present invention is therefore to provide a method and a device for processing the matching relationship of video content, so as to address the above problem.
To achieve this goal, according to one aspect of the present invention, a method for processing the matching relationship of video content is provided. The method comprises: acquiring video content, and determining the video type of the video content according to its parameters; extracting video features of the video content according to the video type; and querying a video feature library, according to the video features, for matching video content corresponding to the video content, and generating a video association library, wherein, when matching video content is successfully found, the association between the successfully matched video content and the matching video content is saved to the video association library.
Further, the video type includes the complete-program video and the program-fragment video. Acquiring the video content and determining its video type according to its parameters comprises: acquiring the video content; and determining its video type either by checking an attribute flag of the video content or according to its video length, wherein, when the video length is greater than or equal to a first threshold, the video content is a complete-program video; when the video length is less than or equal to a second threshold, the video content is a program-fragment video; and the first threshold is greater than the second threshold.
Further, extracting the video features of the video content and querying the video feature library for matching video content according to the video features comprises: extracting a time window of predetermined length from the video content; uniformly sampling a predetermined number of video features within the time window; combining the sampled video features to obtain the window video feature of the time window; and querying the video feature library for matching video content that matches the window video feature.
Further, querying the video feature library for matching video content that matches the window video feature comprises: comparing the distance between the window video feature and every window video feature in the video feature library, wherein the window video feature is matched successfully when the distance is less than or equal to a validation value.
Further, the video features include image features and audio features. Extracting the image features of the video content comprises: dividing each image of the video content into blocks; extracting an image feature from each block; and combining the image features of the blocks to obtain the image feature. Extracting the audio features of the video content comprises: dividing the video content into uniform timeslices according to a predetermined timeslice length, with adjacent timeslices overlapping; and extracting an audio feature for each timeslice interval.
Further, before the video content is acquired, the method also comprises: producing the video content to obtain a video file, and building the video feature library, which comprises a complete-program feature library and a program-fragment feature library.
Further, querying the video feature library for matching video content and generating the video association library comprises: when the video content is a complete-program video, matching the video features of each of its time windows against the program-fragment feature library to obtain one or more first program-fragment videos corresponding to the complete-program video, and saving the association between the complete-program video and each first program-fragment video to the video association library; or, when the video content is a program-fragment video, matching the video features of its first time window against the complete-program feature library to obtain the first complete-program video corresponding to the program-fragment video, and saving the association between the program-fragment video and the first complete-program video to the video association library.
Further, after the matching video content has been queried and the video association library generated, the method also comprises: reading video content; querying the video association library for the matching video content corresponding to the video content; and, when the matching video content is found, playing the matching video corresponding to the video content.
Further, playing the matching video when it is found comprises: when the video content is a program-fragment video, directly playing the first complete-program video that was found; or, when the video content is a complete-program video, directly playing the one or more first program-fragment videos that were found, and marking the one or more first program-fragment videos as labels on the progress bar of the complete-program video.
To achieve this goal, according to another aspect of the present invention, a device for processing the matching relationship of video content is provided. The device comprises: a video-type processing unit, configured to determine the video type of acquired video content according to its parameters; an extraction unit, configured to extract video features of the video content according to the video type; and a matching unit, configured to query the video feature library, according to the video features, for matching video content corresponding to the video content, and to generate the video association library, wherein, when matching video content is successfully found, the association between the successfully matched video content and the matching video content is saved to the video association library.
Further, the video-type processing unit comprises: a receiving module, configured to acquire the video content; and an authentication module, configured to determine the video type either by checking an attribute flag of the video content or according to its video length, the video type including the complete-program video and the program-fragment video, wherein, when the video length is greater than or equal to a first threshold, the video content is a complete-program video; when the video length is less than or equal to a second threshold, the video content is a program-fragment video; and the first threshold is greater than the second threshold.
Further, the extraction unit comprises: an acquisition module, configured to extract one or more time windows of predetermined length from the video content; and a sampling module, configured to uniformly sample a predetermined number of video features within each time window.
Further, the matching unit comprises: a combination module, configured to combine the sampled video features to obtain the window video feature of the time window; and a query module, configured to query the video feature library for matching video content that matches the window video feature.
Further, the query module comprises: a comparison module, configured to compare the distance between the window video feature and every window video feature in the video feature library, wherein the window video feature is matched successfully when the distance is less than or equal to a validation value.
Further, the extraction unit is a first extraction unit or a second extraction unit. The first extraction unit is configured to extract the image features of the video content and comprises: a first dividing module, configured to divide each image of the video content into blocks; a first acquisition module, configured to extract an image feature from each block; and a second combination module, configured to combine the image features of the blocks to obtain the image feature. The second extraction unit is configured to extract the audio features of the video content and comprises: a second dividing module, configured to divide the video content into uniform timeslices according to a predetermined timeslice length, with adjacent timeslices overlapping; and a second acquisition module, configured to extract an audio feature for each timeslice interval.
Further, the device also comprises: a feature-library creation unit, configured to build the video feature library when the video content is produced, the video feature library comprising a complete-program feature library and a program-fragment feature library.
Further, the matching unit comprises a first matching unit or a second matching unit. The first matching unit is configured, when the video content is a complete-program video, to match the video features of each of its time windows against the program-fragment feature library to obtain one or more first program-fragment videos corresponding to the complete-program video, and to save the association between the complete-program video and each first program-fragment video to the video association library. The second matching unit is configured, when the video content is a program-fragment video, to match the video features of its first time window against the complete-program feature library to obtain the first complete-program video corresponding to the program-fragment video, and to save the association between the program-fragment video and the first complete-program video to the video association library.
Further, the device also comprises: a reading unit, configured to read video content; a query processing unit, configured to query the video association library for the matching video content corresponding to the video content; and a playback unit, configured to play the matching video corresponding to the video content when the matching video content is found.
Further, the playback unit comprises: a first playback unit, configured, when the video content is a program-fragment video, to directly play the first complete-program video that was found; or a second playback unit, configured, when the video content is a complete-program video, to directly play the one or more first program-fragment videos that were found and to mark them as labels on the progress bar of the complete-program video.
Through the present invention, video content is acquired and its video type, complete-program video or program-fragment video, is determined according to its parameters; video features are extracted according to the video type; matching video content corresponding to the video content is queried in the video feature library according to the video features; and a video association library is generated, in which, when matching video content is successfully found, the successfully matched video content and its matching video content, together with the association between them, are saved. This solves the problem of the related art that establishing associations between online videos manually makes the search work in the video-content matching process heavy and inefficient, and thereby, by matching video content automatically, achieves the effect of improving the efficiency of video search and saving labor cost.
Description of drawings
The accompanying drawings described here are provided for further understanding of the present invention and constitute a part of this application. The illustrative embodiments of the present invention and their descriptions are used to explain the present invention and do not unduly limit it. In the drawings:
Fig. 1 is a structural diagram of a device for processing the matching relationship of video content according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of image blocking according to an embodiment of the present invention;
Fig. 3 is a flowchart of a method for processing the matching relationship of video content according to an embodiment of the present invention;
Fig. 4 is a flowchart of a method for extracting video features of video content according to an embodiment of the present invention;
Fig. 5 is a flowchart of a method for creating a video association library according to an embodiment of the present invention;
Fig. 6 is a flowchart of playing the video content found by a query according to an embodiment of the present invention.
Embodiment
It should be noted that, where no conflict arises, the embodiments of this application and the features in those embodiments may be combined with one another. The present invention is described in detail below with reference to the drawings and in conjunction with the embodiments.
Fig. 1 is a structural diagram of a device for processing the matching relationship of video content according to an embodiment of the present invention. As shown in Fig. 1, the device comprises: a video-type processing unit 10, configured to determine the video type of acquired video content according to its parameters; an extraction unit 30, configured to extract video features of the video content according to the video type; and a matching unit 50, configured to query the video feature library, according to the video features, for matching video content corresponding to the video content, and to generate the video association library. When matching video content is successfully found, the association between the successfully matched video content and the matching video content is saved to the video association library; that is, the ID of the successfully matched video content and its corresponding matching video content can be saved to the video association library, together with the association between them.
The above embodiment of this application uses the matching unit 50 to establish content-repeat associations between video contents. A matching algorithm based on video features can then fully replace manual work: all video contents are analyzed and processed automatically, and the content-repeat associations among the matched videos are established. This not only yields very high search efficiency but also locates very accurate time points, thereby replacing the manual establishment of associations between online videos and solving the problem that manual searching in the video-content matching process is heavy and inefficient. As a result, matching video content automatically improves the efficiency of video search and saves labor cost. Furthermore, content-based matching makes it possible to search the content-repeat associations for the source of a video fragment, and to generate highlight clips automatically from those associations.
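The overall flow just described — determine the type, extract features by type, query the library, and store only successful matches — can be sketched as follows. This is a hypothetical illustration, not the patent's implementation: the callables `classify`, `extract_features`, and `query_library` are stand-ins for the units of Fig. 1.

```python
def process_matching(videos, classify, extract_features, query_library):
    """Hypothetical sketch of the device's flow: determine the video type,
    extract features by type, query the feature library, and store only
    successful matches in the association library."""
    association_library = []
    for video in videos:
        vtype = classify(video)                    # video-type processing unit
        features = extract_features(video, vtype)  # extraction unit
        match = query_library(features, vtype)     # matching unit
        if match is not None:                      # save only successful matches
            association_library.append((video["id"], match))
    return association_library
```

Only the pairs for which the query succeeds reach the association library, mirroring the "when matching video content is successfully found" condition above.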
As shown in Fig. 1, the video-type processing unit 10 in the above embodiment of the present invention can comprise: a receiving module 101, configured to acquire the video content; and an authentication module 102, configured to determine the video type either by checking an attribute flag of the video content or according to its video length, the video type including the complete-program video and the program-fragment video. When the video length is greater than or equal to a first threshold, the video content is a complete-program video; when the video length is less than or equal to a second threshold, the video content is a program-fragment video; when the video length lies between the second threshold and the first threshold, the video content is treated as both a complete-program video and a program-fragment video; and the first threshold is greater than the second threshold.
The video-type processing unit 10 described above can distinguish the type of the received video content according to an attribute of the video itself, or it can classify the video content by video-length thresholds.
Specifically, in some application scenarios a video carries an attribute, assigned at production time, that marks it as a complete program or a program fragment; in that case the video can be classified directly by this attribute. In other application scenarios the video has no such attribute flag and can only be classified by its length, for example with the dual-threshold overlapping classification method. Two thresholds are set, Threshold1 and Threshold2, where Threshold1 is less than Threshold2. If the video length is less than Threshold1, the video is considered a program-fragment video; if the length is greater than Threshold2, a complete-program video; and if the length lies between Threshold1 and Threshold2, the video is considered both a program-fragment video and a complete-program video.
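The dual-threshold overlapping classification can be sketched as below. The concrete threshold values (in seconds) are assumptions for illustration only; the patent does not specify them.

```python
def classify_video(length_s, threshold1=300.0, threshold2=1200.0):
    """Dual-threshold overlapping classification: videos no longer than
    threshold2 count as fragments, videos no shorter than threshold1 count
    as complete programs, so mid-length videos receive both labels.
    Threshold values are assumed, not taken from the patent."""
    types = set()
    if length_s <= threshold2:
        types.add("fragment")   # short enough to be a program fragment
    if length_s >= threshold1:
        types.add("complete")   # long enough to be a complete program
    return types
```

The overlap between the two thresholds is deliberate: a mid-length video is indexed in both feature libraries, so it can later be matched from either direction.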
The extraction unit 30 in the above embodiment of the present invention can comprise the following functional modules: an acquisition module 301, configured to extract a time window of predetermined length from the video content; and a sampling module 302, configured to uniformly sample a predetermined number of video features within the time window. These functional modules implement the selection and extraction of the video features of the video content.
In particular, the extraction unit 30 in this application can perform feature matching within a time window of s seconds: f features are uniformly sampled within the s-second window, these features are compared, and the comparison result serves as the matching result of the video in that window. For example, we take s = 10 seconds as a time window and uniformly choose f = 10 features within the window, concatenated as the feature of that time window. Compared with existing techniques that match two videos using a single frame or a single timeslice, this method reduces the error of the matching result and yields a better matching effect.
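As a sketch of the windowed sampling step, the following assumes per-frame features are already available as rows of an array; the frame rate and array layout are illustrative assumptions, not values from the patent.

```python
import numpy as np

def window_feature(frame_features, window_s=10.0, samples=10, fps=25):
    """Uniformly sample `samples` per-frame features within a window of
    `window_s` seconds and concatenate them into one window feature.
    `frame_features` is assumed to be an array of shape (frames, dim)."""
    frames_in_window = int(window_s * fps)
    idx = np.linspace(0, frames_in_window - 1, samples).astype(int)
    return np.concatenate([frame_features[i] for i in idx])
```

Concatenating f = 10 uniformly spaced features gives one fixed-length vector per 10-second window, which is what the library stores and compares.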
For different video features, the extraction unit 30 in the above embodiment can be a first extraction unit or a second extraction unit. The video features handled by the two units can be image features or audio features: the image feature is any one of a global image feature, a color-histogram feature, and a YUV color feature, and the audio feature is either a Mel-frequency cepstral coefficient (MFCC) feature or a Fourier-coefficient feature. The first and second extraction units describe a video with the dual features of image and audio. During video matching, the image features and the audio features are matched independently, and the video match is considered successful only when both kinds of features match successfully, which guarantees a higher matching accuracy.
Preferably, the first extraction unit is configured to extract the image features of the video content and comprises: a first dividing module, configured to divide each image of the video content into blocks; a first acquisition module, configured to extract an image feature, for example a histogram feature, from each block; and a second combination module, configured to combine the image features of the blocks to obtain the image feature.
Preferably, the second extraction unit is configured to extract the audio features of the video content and comprises: a second dividing module, configured to divide the video content into uniform timeslices according to a predetermined timeslice length, with adjacent timeslices overlapping; and a second acquisition module, configured to extract an audio feature for each timeslice interval. Specifically, the second extraction unit can extract the MFCC features of the video content: the second dividing module divides the audio into uniform timeslices of the predetermined length; the second acquisition module extracts an MFCC feature for each timeslice, with the timeslices of adjacent MFCC features overlapping; and differential parameters describing the dynamics of the speech are added to the MFCC feature.
There are many features that describe image content. In line with the aim of describing the overall condition of the video, the first extraction unit extracts global image features rather than local features. To guarantee the speed of feature extraction, a color-histogram-type feature can be chosen, which both describes the overall condition of the image and can be computed quickly. Preferably, the YUV color space is chosen, which matches the visual characteristics of the human eye better than the RGB color space.
In addition, because the histogram feature of an entire image contains no positional information, the image can be divided into blocks, a histogram feature extracted from each block, and the features combined into the overall feature of the image, so that the image feature carries some positional information.
The image-blocking scheme is shown in Fig. 2. First, the image is cut into a 3x3 grid, with the ratio 0.25 : 0.5 : 0.25, i.e. 1 : 2 : 1, in both the horizontal and the vertical direction. Cut this way, the center cell accounts for one quarter of the image area, the four corner cells together account for another quarter, and the four edge cells account for the remaining half. Each cell is given a different weight: the center cell is the most important, with the maximum weight of 4; the four corner cells are the least important, with weight 0; and the other four cells have weight 1. Then, after a histogram feature is extracted in each cell, the features of the cells are multiplied by their weights and concatenated in order as the overall feature of the image.
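The weighted blocked histogram above can be sketched as follows. The bin count, the grayscale input, and the density normalization are assumptions added for illustration; only the 1 : 2 : 1 grid and the 4 / 1 / 0 weights come from the description.

```python
import numpy as np

def blocked_histogram(image, bins=8):
    """3x3 grid with 1:2:1 splits in both directions; per-cell weights are
    4 (center), 1 (edges), 0 (corners).  Returns the weighted concatenation
    of per-cell histograms.  Bin count and normalization are assumptions."""
    h, w = image.shape[:2]
    ys = [0, h // 4, (3 * h) // 4, h]   # 1:2:1 vertical split
    xs = [0, w // 4, (3 * w) // 4, w]   # 1:2:1 horizontal split
    weights = [[0, 1, 0],
               [1, 4, 1],
               [0, 1, 0]]               # corners contribute nothing
    parts = []
    for r in range(3):
        for c in range(3):
            cell = image[ys[r]:ys[r + 1], xs[c]:xs[c + 1]]
            hist, _ = np.histogram(cell, bins=bins, range=(0, 256), density=True)
            parts.append(weights[r][c] * hist)
    return np.concatenate(parts)
```

Keeping the zero-weighted corner slots in the concatenation preserves a fixed vector length, so features from different images remain directly comparable.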
The second extraction unit adopts the Mel-frequency cepstral coefficient (MFCC) feature, which not only describes the characteristics of the audio well in the frequency domain but is also, compared with audio features such as Fourier coefficients, closer to the auditory properties of the human ear. In speech-recognition algorithms, differential parameters characterizing the dynamics of speech are often added to the extracted speech features to improve the recognition performance of the system. In this system, the first-order and second-order difference parameters of the MFCC parameters are preferably extracted, so that the generated audio features improve the accuracy of the system.
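A minimal numpy sketch of appending first- and second-order difference parameters to an MFCC matrix is shown below. It uses simple adjacent-frame differences for illustration; real speech systems typically compute deltas over a regression window, and the patent does not specify the delta formula.

```python
import numpy as np

def with_deltas(mfcc):
    """Append first- and second-order difference parameters to an MFCC
    matrix of shape (frames, coeffs).  A simplified adjacent-frame sketch
    of the dynamic features described above."""
    d1 = np.diff(mfcc, axis=0, prepend=mfcc[:1])  # first-order delta
    d2 = np.diff(d1, axis=0, prepend=d1[:1])      # second-order delta
    return np.hstack([mfcc, d1, d2])              # (frames, 3 * coeffs)
```

Tripling the coefficient count this way lets the matching stage see not only the spectral shape of each timeslice but also how it is changing.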
In addition, in order to keep the continuity of audio frequency characteristics, second extraction unit can be selected the timeslice of 0.08 second length when extracting audio frequency characteristics, and two adjacent timeslices adopt overlapping mode, overlapping length can be half of a timeslice length, promptly 0.04 second, make adjacent audio frequency characteristics that certain continuity is arranged on voice data like this, can reduce because the oversize characteristic matching accuracy rate that causes of timeslice descends.In this manner, average one second audio frequency can extract 25 audio frequency characteristics.
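The 0.08-second slices with a 0.04-second overlap described above can be enumerated as follows; the function name and the rounding details are illustrative assumptions, but the hop arithmetic reproduces the stated average of 25 features per second.

```python
# Enumerate overlapping audio time slices: one 0.08 s slice begins every
# 0.04 s (half a slice length), so 25 slices start within each second.
def slice_starts(duration_s, win_s=0.08, hop_s=0.04):
    n = round(duration_s / hop_s)  # one slice begins every hop
    return [round(i * hop_s, 4) for i in range(n)]

starts = slice_starts(1.0)
print(len(starts))  # -> 25
```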
In summary, the choice of video features is very important: it directly affects the speed of feature extraction and the accuracy and speed of video matching. If computing a feature is complex and time-consuming, feature extraction becomes very expensive. The feature extraction approach described above not only describes the video content well but also achieves high accuracy. At the same time, because the length of the feature vector and the method of computing feature distances affect the speed of video matching, the shorter feature vectors extracted in this embodiment allow a simpler distance computation and thus faster feature matching.
As shown in Figure 1, the matching processing unit 50 in the above embodiments of the present application may comprise: a combining module 501 for combining the predetermined number of sampled video features into the window video feature of the time window; and a query module 502 for querying the video feature library for match video content matching the window video feature. Preferably, the query module 502 may comprise a comparison module for computing the distance between the window video feature and all window video features in the video feature library, wherein, when the distance is less than or equal to a validation value, the window video feature is matched successfully and the successfully matched video content and match video content are saved to the video association relationship library.
Depending on the video type, the matching processing unit 50 in the above embodiments of the present application may be a first matching processing unit or a second matching processing unit. The first matching processing unit is used, when the video content is a complete program video, to obtain from the program fragment feature library one or more first program fragment videos corresponding to the complete program video according to the video features of the complete program video, and to save the association between the complete program video and each first program fragment video to the video association relationship library. This embodiment matches each time-window feature of the complete program video against all window features in the program fragment feature library; many matching results may be found in the end, and each is saved to the association relationship database.
The second matching processing unit is used, when the video content is a program fragment video, to match the video features in the first time window of the program fragment video against the complete program feature library to obtain the first complete program video corresponding to the program fragment video, and to save the association between the program fragment video and the first complete program video to the video association relationship library. This embodiment matches the first time-window feature of the program fragment video against all window features in the complete program feature library; at most one matching result may be found in the end, and it is saved to the association relationship database.
In a specific implementation, each newly added video is first classified and its features extracted. If the video is a program fragment video, its features are matched against the complete program feature library; if it is a complete program video, its features are matched against the program fragment feature library. If a corresponding video is matched, a new association is generated and stored in the association relationship library.
The matching processing unit 50 in the above embodiments implements feature matching and the building of the association relationship library. Because the image features and audio features extracted by the extraction unit 30 are all feature vectors composed of floating-point numbers, with each dimension a floating-point number, an N-dimensional histogram feature is N floating-point numbers. Computing the distance between two N-dimensional feature vectors directly with the Euclidean distance requires N floating-point multiplications and a square-root operation, a relatively large amount of computation. To speed up feature comparison, the chessboard distance may be used instead, i.e. the distance between the two vectors is taken as the sum of the per-dimension distances; this requires only addition and subtraction operations, greatly reducing the computation.
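A minimal sketch contrasting the two distances discussed above; note that in standard terminology the per-dimension sum the text calls the "chessboard distance" is the city-block (L1) distance. The toy vectors are illustrative.

```python
import math

# Per-dimension-sum distance (the "chessboard distance" of the text, i.e.
# the city-block / L1 distance) versus the Euclidean distance.
def l1_distance(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def euclidean_distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

a, b = [0.1, 0.4, 0.3], [0.2, 0.1, 0.3]
print(l1_distance(a, b))         # ~0.4, additions/subtractions only
print(euclidean_distance(a, b))  # needs N multiplications and a sqrt
```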
As can be seen from the above, after uniformly sampling f features within a time window of s seconds, the matching processing unit 50 in the present application matches two videos, with the program fragment video VideoA as the matching-request video and the complete program video VideoB as the video being matched. A time window is taken at the start of VideoA, and its feature FeatureAt (t=0) is compared by distance calculation with all time-window features FeatureBt (t=0..end) of VideoB. If at some time point t0 the distance is less than a threshold, this threshold being the validation value, FeatureAt (t=0) and FeatureBt (t=t0) are matched successfully. If such a successful match occurs, further verification is performed: M-1 time-window features FeatureAt (t=Dt*m, m=1,2,...,M-1) are uniformly chosen at intervals of length Dt after the start time point of VideoA, and likewise M-1 time-window features FeatureBt (t=t0+Dt*m, m=1,2,...,M-1) are uniformly chosen at intervals of length Dt after time point t0 of VideoB. These time-window features are then matched pairwise; if all M-1 pairs match successfully, VideoA and VideoB are matched successfully at time point t0.
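The candidate-then-verify procedure above can be sketched as follows, assuming toy one-dimensional feature vectors, an L1 distance, and illustrative values for the threshold, the interval Dt, and the window count M; none of these concrete values come from the patent.

```python
# Find a candidate offset t0 by comparing the fragment's first window
# feature against every window of the full program, then verify with M-1
# further windows spaced Dt apart, as in the matching scheme above.
def l1(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def match_fragment(frag_feats, full_feats, threshold, dt=1, m=4):
    """frag_feats / full_feats: lists of window feature vectors, one per step."""
    first = frag_feats[0]
    for t0 in range(len(full_feats)):
        if l1(first, full_feats[t0]) > threshold:
            continue
        # candidate found; verify M-1 subsequent windows at interval dt
        ok = True
        for i in range(1, m):
            ta, tb = i * dt, t0 + i * dt
            if ta >= len(frag_feats) or tb >= len(full_feats):
                break
            if l1(frag_feats[ta], full_feats[tb]) > threshold:
                ok = False
                break
        if ok:
            return t0  # fragment matches the full program at offset t0
    return None

full = [[float(t)] for t in range(10)]  # toy 1-D "features" for VideoB
frag = [[3.0], [4.0], [5.0], [6.0]]     # copy of full[3:7], i.e. VideoA
print(match_fragment(frag, full, threshold=0.1))  # -> 3
```

The verification step is what keeps a single accidental window collision from producing a false association.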
Preferably, for databases with very large numbers of videos, where feature matching takes a long time, several methods may be used to reduce the matching time, for example indexing, time restriction, and column restriction. Building an index allows fast feature retrieval but makes the system more complex, and the refresh frequency of the index also strongly affects the results. Time restriction means matching only videos within a time range, with videos whose production time is too far in the past automatically deleted from the feature library, thereby reducing the scope of feature matching. Column restriction means that, according to the column attribute tag of a video, each match video is matched only against videos with the same column attribute, which likewise greatly reduces the scope of video matching.
If VideoA and VideoB match successfully at time point t0, an association (VideoA, VideoB, t0) is generated, indicating that video VideoA appears, i.e. matches successfully, at position t0 of VideoB. This association is stored in the association relationship library, which is the video association relationship library.
The device in each of the above embodiments of the application may further comprise: a feature library creation unit 70 for building the video feature library when the video content is produced, the video feature library comprising a complete program feature library and a program fragment feature library.
In the present application, before the video content extraction and matching process is entered, the video feature library is first established. Because videos are divided into two classes, complete program videos and program fragment videos, the video feature library may comprise a complete program feature library and a program fragment feature library. In video classification, videos may be distinguished by their own attributes or divided by a video-length threshold.
The device in each of the above embodiments of the application may further comprise: a reading unit 601 for reading the video content; a query processing unit for querying the video association relationship library for the match video content corresponding to the video content; and a playback unit 602 for playing the match video corresponding to the video content when the match video content is found successfully. Preferably, the playback unit 602 may comprise: a first playback unit for directly playing the found first complete program video when the video content is a program fragment video; or a second playback unit for directly playing the found one or more first program fragment videos when the video content is a complete program video, the one or more first program fragment videos being marked as labels on the progress bar of the complete program video. In this manner the source of the current video fragment, i.e. the complete video containing the current fragment, can be found quickly, bringing great convenience and a novel experience to the user.
The above embodiment operates when a video is selected for viewing by the user: the device can automatically query the association relationship library for whether this video has an association. If the query result indicates that the video is a program fragment video with a corresponding complete program video, the complete program video is presented to the user, letting the user know which complete program the current fragment comes from and choose to watch that complete program. If the query result indicates that the video is a complete program video with multiple corresponding program fragment videos, the information of those program fragment videos is displayed on the complete program, letting the user know that the program can be divided into multiple highlight fragments, each of which can be located and watched directly.
Methods of improving the user experience on video by marking labels already exist, but those labels are all generated manually: an editor must choose the time point and input the label content in advance. Such methods cost human effort and time and cannot be applied to video at scale. Specifically, in the above embodiment, after the device automatically generates the video content associations, it also automatically generates a highlight label for each program fragment video and inserts the label into the complete program video. The label comprises the label's time point and content, achieving accurate time points and wide applicability to large-scale video.
Fig. 3 is a flowchart of a method for processing the matching relationship of video content according to an embodiment of the invention. As shown in Fig. 3, the method comprises the following steps:
Step S10: obtain the video content via the video type processing unit 10 in Fig. 1, and determine the video type of the video content according to the parameters of the video content.
Step S30: extract the video features of the video content according to the video type via the extraction unit 30 in Fig. 1.
Step S50: query the video feature library via the matching processing unit 50 in Fig. 1 for the match video content corresponding to the video content according to the video features, and generate the video association relationship library, wherein, when the match video content is queried successfully, the association between the successfully matched video content and the match video content is saved to the video association relationship library; that is, the ID of the successfully matched video content and its corresponding match video content can be saved to the video association relationship library, together with the association between them.
The above embodiments of the present application use the matching processing unit 50 to establish content-repetition associations for video content and then apply a matching algorithm based on video features. This can completely replace manual work: all video content is automatically analyzed and processed, and content-repetition associations among the matched videos are established, achieving not only very high search efficiency but also very accurate time points. It thus replaces the manual establishment of associations between internet videos and solves the problem that manual video content matching is laborious and inefficient; in turn, automatic video matching improves the efficiency of video search and saves labor cost. Further, the content-repetition associations can be used for content-based search of the source of a video fragment and for automatic generation of highlight fragments.
Fig. 4 is a flowchart of the video feature extraction method for video content according to an embodiment of the invention. As shown in Fig. 4, in step S10 of the above embodiment the video type comprises the complete program video and the program fragment video, wherein obtaining the video content and determining its video type according to its parameters may comprise the following steps:
Step S101: obtain the video content, i.e. input the video to the device.
Step S102: determine the video type of the video content by checking its attribute flag, or determine the video type according to the video length of the video content; wherein, when the video length of the video content is greater than or equal to a first threshold, the video content is a complete program video; when the video length is less than or equal to a second threshold, the video content is a program fragment video; and when the video length is between the second threshold and the first threshold, the video content is both a complete program video and a program fragment video, the first threshold being greater than the second threshold. The embodiment in this step distinguishes the type of the received video content according to the video's own attributes, or divides the type by the video-length thresholds.
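The length-threshold classification of step S102 can be sketched as follows; the concrete threshold values (in seconds) are illustrative assumptions, as the patent does not specify them.

```python
# Classify a video by length: >= FIRST_THRESHOLD means complete program,
# <= SECOND_THRESHOLD means program fragment, and lengths in between are
# treated as both, per step S102.
FIRST_THRESHOLD = 20 * 60   # seconds; assumed value
SECOND_THRESHOLD = 5 * 60   # seconds; assumed value

def classify(length_s):
    types = set()
    if length_s >= FIRST_THRESHOLD:
        types.add("complete_program")
    if length_s <= SECOND_THRESHOLD:
        types.add("program_fragment")
    if SECOND_THRESHOLD < length_s < FIRST_THRESHOLD:
        types.update({"complete_program", "program_fragment"})
    return types

print(sorted(classify(30 * 60)))  # ['complete_program']
print(sorted(classify(2 * 60)))   # ['program_fragment']
print(sorted(classify(10 * 60)))  # ['complete_program', 'program_fragment']
```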
Preferably, before the video content is obtained in step S10, the method may further comprise: producing the video content, and building the video feature library by extracting video features, the video feature library comprising a complete program feature library and a program fragment feature library.
Fig. 5 is a flowchart of the method for creating the video association relationship library according to an embodiment of the invention. As shown in Fig. 5, extracting the video features of the video content in step S30 above and querying the video feature library in step S50 for the match video content corresponding to the video content according to the video features may comprise the following steps:
Step S301: extract a time window of a predetermined length from the video content, uniformly sample a predetermined number of video features within the time window, and combine the sampled video features to obtain the window video feature of the time window. This method of extracting the video features of the video content can be applied in the process of creating the video feature library or the video association relationship library.
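One way to combine the f features uniformly sampled within an s-second window into a single window feature, as step S301 describes, is simple concatenation; the per-frame features, the frame rate, and the values of s and f below are illustrative assumptions.

```python
# Pick f evenly spaced per-frame features across an s-second window and
# concatenate them into one window feature vector.
def window_feature(frame_feats, s=4, f=8, fps=25):
    total = s * fps  # frames covered by the window
    idxs = [i * total // f for i in range(f)]
    combined = []
    for i in idxs:
        combined.extend(frame_feats[i])
    return combined

feats = [[float(i)] for i in range(100)]  # toy 1-D per-frame features
wf = window_feature(feats)
print(len(wf))  # 8
```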
Step S501: query the video feature library for the match video content matching the window video feature.
Preferably, querying the video feature library for the match video content matching the window video feature in the above step may comprise: computing the distance between the window video feature and all window video features in the video feature library, wherein, when the distance is less than or equal to the validation value, the window video feature is matched successfully.
Specifically, step S301 may perform feature matching over a time window of s seconds by uniformly sampling f features within the window, comparing these features, and using the comparison result as the matching result of the video in this time window. Compared with existing techniques that use a single frame image or a single time slice when matching two videos, this method reduces matching error and improves the matching effect.
Meanwhile, in the above video feature sampling process, because the video content types in the present application comprise complete program videos and program fragment videos, in step S301 program fragment videos are drawn from the program fragment feature library and complete program videos from the complete program feature library. The video features comprise image features and audio features. Extracting the image features of the video content comprises: partitioning the image of the video content into blocks; extracting the image feature of each block; and combining the image features of the blocks to obtain the image feature. Extracting the audio features of the video content comprises: dividing the video content into uniform time slices of a predetermined slice length, with adjacent slices overlapping; and extracting the audio feature of each time slice interval.
In extracting the video features, because there are many features that describe image content, the whole-image feature is chosen rather than local features. To ensure the speed of feature extraction, a color-histogram-type feature may be chosen, which both describes the overall condition of the image and can be computed quickly. Preferably, the YUV color space may be chosen; compared with the RGB color space, it better matches the visual characteristics of the human eye.
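As a concrete illustration of the RGB-to-YUV conversion implied above, one common coefficient set (BT.601) is sketched below; the patent does not specify which coefficients are used, so this particular choice is an assumption.

```python
# RGB -> YUV conversion with BT.601 coefficients: Y is a luminance-weighted
# sum, U and V are scaled blue and red color differences.
def rgb_to_yuv(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = 0.492 * (b - y)
    v = 0.877 * (r - y)
    return y, u, v

print(rgb_to_yuv(255, 255, 255))  # white: Y ~ 255, U ~ 0, V ~ 0
```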
In addition, because the histogram feature of the entire image does not contain positional information, the image may be partitioned into blocks, a histogram feature extracted from each block separately, and these features combined into the overall feature of the image, so that the image feature carries a degree of positional information.
Specifically, in the process of matching two videos in step S501, take the program fragment video VideoA as the matching-request video and the complete program video VideoB as the video being matched, for example. A time window is taken at the start of VideoA, and its feature FeatureAt (t=0) is compared by distance calculation with all time-window features FeatureBt (t=0..end) of VideoB; if at some time point t0 the distance is less than the threshold, FeatureAt (t=0) and FeatureBt (t=t0) are matched successfully. If such a successful match occurs, further verification is performed: M-1 time-window features FeatureAt (t=Dt*m, m=1,2,...,M-1) are uniformly chosen at intervals of length Dt after the start time point of VideoA, and likewise M-1 time-window features FeatureBt (t=t0+Dt*m, m=1,2,...,M-1) are uniformly chosen at intervals of length Dt after time point t0 of VideoB. These time-window features are then matched pairwise; if all M-1 pairs match successfully, VideoA and VideoB are matched successfully at time point t0.
For databases with very large numbers of videos, where feature matching takes a long time, several methods may be used to reduce the matching time, for example indexing, time restriction, and column restriction. Building an index allows fast feature retrieval but makes the system more complex, and the refresh frequency of the index also strongly affects the results. Time restriction means matching only videos within a time range, with videos whose production time is too far in the past automatically deleted from the feature library, thereby reducing the scope of feature matching. Column restriction means that, according to the column attribute tag of a video, each match video is matched only against videos with the same column attribute, which likewise greatly reduces the scope of video matching.
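The time- and column-restriction ideas above amount to filtering the candidate set before matching; a sketch follows, in which the record fields, the cutoff value, and the in-memory list standing in for the feature library are all illustrative assumptions.

```python
import datetime

# Keep only candidates with the same column attribute tag whose production
# time falls within the allowed range, per the restrictions described above.
def candidates(library, query_column, cutoff_days=365):
    cutoff = datetime.date.today() - datetime.timedelta(days=cutoff_days)
    return [v for v in library
            if v["column"] == query_column and v["produced"] >= cutoff]

library = [
    {"id": "a", "column": "news",  "produced": datetime.date.today()},
    {"id": "b", "column": "sport", "produced": datetime.date.today()},
    {"id": "c", "column": "news",
     "produced": datetime.date.today() - datetime.timedelta(days=800)},
]
print([v["id"] for v in candidates(library, "news")])  # ['a']
```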
If VideoA and VideoB match successfully at time point t0, an association (VideoA, VideoB, t0) is generated, indicating that video VideoA appears, i.e. matches successfully, at position t0 of VideoB, and this association is stored in the association relationship library.
As shown in Fig. 5, querying the video feature library for the match video content corresponding to the video content according to the video features and generating the video association relationship library can be realized by step S502 or step S503:
Step S502: when the video content is a complete program video, the first program fragment videos corresponding to the complete program video can be obtained from the program fragment feature library according to the video features of the complete program video, and the ID of the complete program video, the first program fragment videos, and the associations between them are saved to the video association relationship library. In this step, the video features in each time window of the complete program video can be matched against the program fragment feature library to obtain the one or more first program fragment videos corresponding to the complete program video; several matching results may be found in the end.
Step S503: when the video content is a program fragment video, the first complete program video corresponding to the program fragment video is obtained from the complete program feature library according to the video features of the program fragment video, and the program fragment video, the first complete program video, and the association between them are saved to the video association relationship library. In this step, the video features in the first time window of the program fragment video can be matched against the complete program feature library to obtain the first complete program video corresponding to the program fragment video; at most one matching result may be found in the end.
In the above embodiment of the present invention, after step S50 queries the video feature library for the match video content corresponding to the video content according to the video features and generates the video association relationship library, the method may further comprise: reading the video content; querying the video association relationship library for the match video content corresponding to the video content; and playing the match video corresponding to the video content when the match video content is found successfully. Preferably, playing the match video corresponding to the video content when the match video content is found successfully may comprise: when the video content is a program fragment video, directly playing the found first complete program video; or, when the video content is a complete program video, directly playing the found one or more first program fragment videos, the one or more first program fragment videos being marked as labels on the progress bar of the complete program video.
Fig. 6 is a flowchart of playing the queried video content according to an embodiment of the invention. As shown in Fig. 6, the above embodiments of the application realize video fragment source search and automatic highlight generation. When the user opens a video, the device searches the association relationship library for associations using the video's unique ID. If a corresponding complete program video is found, the device presents the complete program; if corresponding program fragment videos are found, all the associated fragments are presented as highlight labels; if nothing is found, nothing is presented and the current video simply plays normally.
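The three-way dispatch just described can be sketched as follows; the in-memory dict standing in for the association relationship library, and all IDs and offsets, are illustrative assumptions.

```python
# Look up a video ID in the association relationship library and decide how
# to present it: show its source program (fragment), label its fragments
# (complete program), or play normally (no association).
ASSOCIATIONS = {
    # fragment_id -> (complete_program_id, offset_seconds)
    "frag-1": ("prog-9", 120),
    # complete_program_id -> list of (fragment_id, offset_seconds)
    "prog-9": [("frag-1", 120), ("frag-2", 300)],
}

def on_open(video_id):
    entry = ASSOCIATIONS.get(video_id)
    if entry is None:
        return ("play_normally", None)
    if isinstance(entry, tuple):             # fragment: show its source program
        return ("show_source_program", entry)
    return ("show_highlight_labels", entry)  # program: label its fragments

print(on_open("frag-1"))  # ('show_source_program', ('prog-9', 120))
print(on_open("prog-9"))  # ('show_highlight_labels', [('frag-1', 120), ('frag-2', 300)])
print(on_open("other"))   # ('play_normally', None)
```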
If the current video is a program fragment and an association is found in the association relationship library, the corresponding complete program video in the association is the source video of the current fragment. The device can then present this complete video to the user, indicate that the current fragment comes from it, and provide a link entry so that the user can choose to watch the complete video.
If the current video is a complete program and associations are found in the association relationship library, several associations will generally be found, each corresponding to a program fragment video; these fragments all come from the current complete program and appear at different time points of it. When presenting these fragments to the user, the form of highlight labels can be used, i.e. several labels are marked on the progress bar of the current complete program video, each label corresponding to a program fragment video and indicating the start of a highlight. A tooltip can be added to each label, which may include information such as the name of the highlight video. Each label can provide an operation entry allowing the user to jump directly to the label's position and watch the highlight.
In summary, the application uses computer algorithms, through content-based video matching and search, to quickly and accurately establish content-repetition associations among massive numbers of videos, and designs two novel applications on top of these associations: video fragment source search and automatic highlight generation. Both bring the user a convenient, novel experience.
As shown in Fig. 6, fragment source search: when the user plays a program fragment video, the device automatically searches for the source of the current video, i.e. the complete program containing the current fragment, presents the search result to the user, and indicates that the user can directly play the complete program.
As shown in Fig. 6, automatic highlight generation: when the user plays a complete program video, the device automatically searches for all program fragments that have a content-repetition association with the current video and, after sorting and filtering, marks the fragments as labels on the progress bar of the complete program. Each fragment generates a label, serving as a highlight, and the user is prompted that clicking a label jumps directly to that position to watch the fragment. As shown in the figure, the blue dots below the screen indicate the start marks of highlights, one way of presenting highlights.
It should be noted that the steps shown in the flowcharts of the accompanying drawings can be executed in a computer device, for example as a set of computer-executable instructions, and, although a logical order is shown in the flowcharts, in some cases the steps shown or described may be executed in an order different from that herein.
As can be seen from the above description, the present invention achieves the following technical effects: fully automatic operation, completely replacing manual work and saving a large amount of labor cost; high speed, with very little time needed for video matching, enabling massive-scale video processing; accuracy, precisely locating the time point of a program fragment video on its corresponding complete program video; and a novel user experience, presenting the associations between videos in a brand-new way so that users can better enjoy the convenience this brings.
Obviously, those skilled in the art should understand that the above modules or steps of the present invention can be implemented with general-purpose computing devices; they can be concentrated on a single computing device or distributed over a network formed by multiple computing devices. Optionally, they can be implemented with program code executable by computing devices, so that they can be stored in a storage device and executed by a computing device; alternatively, they can each be made into individual integrated circuit modules, or multiple modules or steps among them can be made into a single integrated circuit module. Thus, the present invention is not restricted to any specific combination of hardware and software.
The above are only the preferred embodiments of the present invention and are not intended to limit it; for those skilled in the art, the present invention may have various changes and variations. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (19)

1. A method for processing a matching relationship of video content, characterized by comprising:
obtaining video content, and determining a video type of said video content according to parameters of said video content;
extracting video features of said video content according to said video type; and
querying a video feature library for match video content corresponding to said video content according to said video features, and generating a video association relationship library, wherein,
when said match video content is queried successfully, an association between the successfully matched said video content and said match video content is saved to said video association relationship library.
2. The method according to claim 1, characterized in that the video type comprises a complete-program video and a program-fragment video, wherein acquiring the video content and determining the video type of the video content according to the parameter of the video content comprises:
acquiring the video content; and
determining the video type of the video content by verifying an attribute flag of the video content, or
determining the video type of the video content according to a video length of the video content; wherein, when the video length of the video content is greater than or equal to a first threshold, the video content is the complete-program video; when the video length of the video content is less than or equal to a second threshold, the video content is the program-fragment video, the first threshold being greater than the second threshold.
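Outside the claim language, the two-path type determination of claim 2 can be sketched as follows. This is an illustrative sketch only: the threshold values, field names, and return labels are assumptions for illustration and are not part of the claims.

```python
# Illustrative sketch of claim 2: classify video content as a complete
# program or a program fragment. Thresholds and labels are hypothetical.

def classify_video(attribute_flag=None, video_length=None,
                   first_threshold=1200.0, second_threshold=600.0):
    """Return 'complete' or 'fragment', or None if undecidable.

    The first threshold must exceed the second, as claim 2 requires;
    lengths falling between the two thresholds are left undecided.
    """
    assert first_threshold > second_threshold
    # Preferred path: a trusted attribute flag set when the video was produced.
    if attribute_flag in ("complete", "fragment"):
        return attribute_flag
    # Fallback path: decide by video length (in seconds).
    if video_length is not None:
        if video_length >= first_threshold:
            return "complete"
        if video_length <= second_threshold:
            return "fragment"
    return None

print(classify_video(video_length=2400))  # long recording -> 'complete'
print(classify_video(video_length=300))   # short clip -> 'fragment'
```

Note that the flag check runs first, so the length heuristic only serves videos whose type was not marked at production time.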
3. The method according to claim 2, characterized in that extracting the video features of the video content and querying, in the video feature library according to the video features, the matched video content corresponding to the video content comprises:
extracting one or more time windows of a predetermined length from the video content;
uniformly sampling a predetermined number of video features within each time window;
combining the sampled predetermined number of video features to obtain a window video feature for the time window; and
querying, in the video feature library, the matched video content matching the window video feature.
4. The method according to claim 3, characterized in that querying, in the video feature library, the matched video content matching the window video feature comprises:
performing a distance comparison between the window video feature and all window video features in the video feature library, wherein, in a case where the distance is less than or equal to a validation value, the window video feature is successfully matched.
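The window sampling and combination of claim 3 and the distance test of claim 4 can be sketched together as follows. The feature dimensions, window length, sample count, Euclidean distance, and validation value are all illustrative assumptions, not the patent's actual choices.

```python
# Illustrative sketch of claims 3-4: build window features by uniform
# sampling and match them against a library by distance comparison.
import numpy as np

def window_features(frame_features, window_len=50, samples=10):
    """Slide fixed-length windows over per-frame feature vectors,
    uniformly sample `samples` vectors in each window, and concatenate
    them into one window video feature."""
    windows = []
    for start in range(0, len(frame_features) - window_len + 1, window_len):
        idx = np.linspace(start, start + window_len - 1, samples).astype(int)
        windows.append(np.concatenate([frame_features[i] for i in idx]))
    return windows

def match_window(query, library, validation_value=0.5):
    """Compare `query` against every window feature in the library;
    the match succeeds only when the smallest distance is less than
    or equal to the validation value (claim 4)."""
    best_id, best_dist = None, float("inf")
    for video_id, feat in library:
        d = np.linalg.norm(query - feat)
        if d < best_dist:
            best_id, best_dist = video_id, d
    return best_id if best_dist <= validation_value else None
```

A brute-force scan is shown for clarity; a production feature library would typically use an index structure instead.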
5. The method according to claim 3, characterized in that the video features comprise image features and audio features, wherein,
the step of extracting the image features of the video content comprises: dividing an image of the video content into blocks; extracting an image feature of each image block; and combining the image features corresponding to the respective image blocks to obtain the image features; and
the step of extracting the audio features of the video content comprises: dividing the video content into uniform time slices according to a predetermined time-slice length, adjacent time slices overlapping each other; and extracting an audio feature within each time-slice interval.
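The two feature-extraction paths of claim 5 can be sketched as follows. The block count, slice length, overlap, and the mean-intensity and energy features are illustrative assumptions; the patent does not specify which per-block or per-slice features are computed.

```python
# Illustrative sketch of claim 5: image features via block division,
# audio features via uniform overlapping time slices.
import numpy as np

def image_feature(frame, blocks=4):
    """Divide a grayscale frame into blocks x blocks tiles, extract a
    (hypothetical) mean-intensity feature per tile, and combine the
    per-tile features into one image feature vector."""
    h, w = frame.shape
    bh, bw = h // blocks, w // blocks
    feats = [frame[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw].mean()
             for r in range(blocks) for c in range(blocks)]
    return np.array(feats)

def audio_slices(samples, slice_len=1024, overlap=512):
    """Cut an audio track into uniform time slices, adjacent slices
    overlapping, and return one (hypothetical) energy feature per slice."""
    hop = slice_len - overlap
    return [float(np.mean(samples[s:s + slice_len] ** 2))
            for s in range(0, len(samples) - slice_len + 1, hop)]
```

The overlap between adjacent slices makes the audio features robust to small alignment offsets between the query video and the library video.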
6. The method according to any one of claims 1 to 5, characterized in that, before acquiring the video content, the method further comprises:
producing the video content to obtain a video file, and establishing the video feature library, the video feature library comprising a complete-program feature library and a program-fragment feature library.
7. The method according to claim 6, characterized in that querying, in the video feature library according to the video features, the matched video content corresponding to the video content and generating the video association relationship library comprises:
in a case where the video content is the complete-program video, matching the video features in each time window of the complete-program video against the program-fragment feature library to obtain one or more first program-fragment videos corresponding to the complete-program video, and saving the association relationship between the complete-program video and each first program-fragment video to the video association relationship library; or
in a case where the video content is the program-fragment video, matching the video features in a first time window of the program-fragment video against the complete-program feature library to obtain a first complete-program video corresponding to the program-fragment video, and saving the association relationship between the program-fragment video and the first complete-program video to the video association relationship library.
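The bidirectional matching of claim 7 can be sketched as follows. The `match_fn` callback, the library shapes, and the tuple-based association records are illustrative assumptions; only the branching logic (all windows for a complete program, the first window for a fragment) follows the claim.

```python
# Illustrative sketch of claim 7: generate association-relationship
# records by matching in the direction determined by the video type.

def build_associations(video_id, video_type, window_feats,
                       fragment_lib, complete_lib, match_fn):
    """Return (video_id, matched_id) association records.

    A complete program is matched window by window against the
    program-fragment library; a fragment is matched by its first
    window against the complete-program library."""
    associations = []
    if video_type == "complete":
        # Every window of a complete program may hit a different fragment.
        for feat in window_feats:
            hit = match_fn(feat, fragment_lib)
            if hit is not None:
                associations.append((video_id, hit))
    else:
        # A fragment only needs its first window to locate its program.
        hit = match_fn(window_feats[0], complete_lib)
        if hit is not None:
            associations.append((video_id, hit))
    return associations
```

Saving these records once, at ingestion time, is what lets the playback step of claim 8 answer queries from the association library without re-running feature matching.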
8. The method according to claim 7, characterized in that, after querying, in the video feature library according to the video features, the matched video content corresponding to the video content and generating the video association relationship library, the method further comprises:
reading the video content;
querying, in the video association relationship library, the matched video content corresponding to the video content; and
in a case where the matched video content is successfully found, playing the matched video corresponding to the video content.
9. The method according to claim 8, characterized in that, in the case where the matched video content is successfully found, playing the matched video corresponding to the video content comprises:
in a case where the video content is the program-fragment video, directly playing the found first complete-program video; or
in a case where the video content is the complete-program video, directly playing the found one or more first program-fragment videos, and marking the one or more first program-fragment videos in the form of labels on a progress bar of the complete-program video.
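The progress-bar labeling of claim 9 reduces to placing each associated fragment at its offset within the program. The sketch below assumes each association record carries a start offset for the fragment, which the claim implies but does not state; the normalized-position representation is an illustrative assumption.

```python
# Illustrative sketch of claim 9: compute label positions for matched
# program fragments on the complete program's progress bar.

def progress_bar_labels(program_length, fragments):
    """Map each (fragment_id, start_offset_seconds) pair to a label
    position normalized to [0, 1] along the progress bar."""
    return [(frag_id, start / program_length) for frag_id, start in fragments]
```

A player UI would then draw one marker per returned pair at the normalized position.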
10. An apparatus for processing a matching relationship of video content, characterized by comprising:
a video type processing unit, configured to determine a video type of acquired video content according to a parameter of the video content;
an extraction unit, configured to extract video features of the video content according to the video type; and
a matching processing unit, configured to query, in a video feature library according to the video features, matched video content corresponding to the video content, and to generate a video association relationship library, wherein, in a case where the matched video content is successfully queried, an association relationship between the successfully matched video content and the matched video content is saved to the video association relationship library.
11. The apparatus according to claim 10, characterized in that the video type processing unit comprises:
a receiving module, configured to acquire the video content; and
a verification module, configured to determine the video type of the video content by verifying an attribute flag of the video content,
or to determine the video type of the video content according to a video length of the video content, the video type comprising a complete-program video and a program-fragment video; wherein, when the video length of the video content is greater than or equal to a first threshold, the video content is the complete-program video; when the video length of the video content is less than or equal to a second threshold, the video content is the program-fragment video, the first threshold being greater than the second threshold.
12. The apparatus according to claim 10, characterized in that the extraction unit comprises:
an acquisition module, configured to extract one or more time windows of a predetermined length from the video content; and
a sampling module, configured to uniformly sample a predetermined number of video features within each time window.
13. The apparatus according to claim 12, characterized in that the matching processing unit comprises:
a combination module, configured to combine the sampled predetermined number of video features to obtain a window video feature for the time window; and
a query module, configured to query, in the video feature library, the matched video content matching the window video feature.
14. The apparatus according to claim 13, characterized in that the query module comprises:
a comparison module, configured to perform a distance comparison between the window video feature and all window video features in the video feature library, wherein, in a case where the distance is less than or equal to a validation value, the window video feature is successfully matched.
15. The apparatus according to claim 10, characterized in that the extraction unit is a first extraction unit or a second extraction unit, wherein,
the first extraction unit is configured to extract the image features of the video content, the first extraction unit comprising:
a first division module, configured to divide an image of the video content into blocks;
a first acquisition module, configured to extract the image feature of each image block; and
a second combination module, configured to combine the image features corresponding to the respective image blocks to obtain the image features; and
the second extraction unit is configured to extract the audio features of the video content, the second extraction unit comprising:
a second division module, configured to divide the video content into uniform time slices according to a predetermined time-slice length, adjacent time slices overlapping each other; and
a second acquisition module, configured to extract the audio feature within each time-slice interval.
16. The apparatus according to any one of claims 10 to 15, characterized in that the apparatus further comprises:
a feature library creation unit, configured to establish the video feature library when the video content is produced, the video feature library comprising a complete-program feature library and a program-fragment feature library.
17. The apparatus according to claim 10, characterized in that the matching processing unit comprises a first matching processing unit or a second matching processing unit, wherein,
the first matching processing unit is configured to, in a case where the video content is the complete-program video, match the video features in each time window of the complete-program video against the program-fragment feature library to obtain one or more first program-fragment videos corresponding to the complete-program video, and to save the association relationship between the complete-program video and each first program-fragment video to the video association relationship library; and
the second matching processing unit is configured to, in a case where the video content is the program-fragment video, match the video features in a first time window of the program-fragment video against the complete-program feature library to obtain a first complete-program video corresponding to the program-fragment video, and to save the association relationship between the program-fragment video and the first complete-program video to the video association relationship library.
18. The apparatus according to claim 10, characterized in that the apparatus further comprises:
a reading unit, configured to read the video content;
a query processing unit, configured to query, in the video association relationship library, the matched video content corresponding to the video content; and
a playback unit, configured to, in a case where the matched video content is successfully found, play the matched video corresponding to the video content.
19. The apparatus according to claim 18, characterized in that the playback unit comprises:
a first playback unit, configured to, in a case where the video content is the program-fragment video, directly play the found first complete-program video; or
a second playback unit, configured to, in a case where the video content is the complete-program video, directly play the found one or more first program-fragment videos and mark the one or more first program-fragment videos in the form of labels on a progress bar of the complete-program video.
CN 201110169978 2011-06-22 2011-06-22 Method and device for processing matching relationship of video content Active CN102222103B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201110169978 CN102222103B (en) 2011-06-22 2011-06-22 Method and device for processing matching relationship of video content

Publications (2)

Publication Number Publication Date
CN102222103A true CN102222103A (en) 2011-10-19
CN102222103B CN102222103B (en) 2013-03-27

Family

ID=44778655

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201110169978 Active CN102222103B (en) 2011-06-22 2011-06-22 Method and device for processing matching relationship of video content

Country Status (1)

Country Link
CN (1) CN102222103B (en)

Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102595206A (en) * 2012-02-24 2012-07-18 央视国际网络有限公司 Data synchronization method and device based on sport event video
CN102883127A (en) * 2012-09-21 2013-01-16 浙江宇视科技有限公司 Method and device for slicing video
CN102932693A (en) * 2012-11-09 2013-02-13 北京邮电大学 Method and device for prefetching video-frequency band
CN103596016A (en) * 2013-11-20 2014-02-19 韩巍 Multimedia video data processing method and device
CN104410906A (en) * 2014-11-18 2015-03-11 北京国双科技有限公司 Detection method and detection device for video playing behavior
WO2015061979A1 (en) * 2013-10-30 2015-05-07 宇龙计算机通信科技(深圳)有限公司 Terminal and method for managing video file
CN104657376A (en) * 2013-11-20 2015-05-27 航天信息股份有限公司 Searching method and searching device for video programs based on program relationship
CN104994426A (en) * 2014-07-07 2015-10-21 Tcl集团股份有限公司 Method and system of program video recognition
CN105376627A (en) * 2014-08-25 2016-03-02 中兴通讯股份有限公司 Video data source playback method, device and system
CN105472407A (en) * 2015-12-15 2016-04-06 北京网博视界科技股份有限公司 Automatic video index and alignment method based on continuous image features
WO2016101256A1 (en) * 2014-12-24 2016-06-30 深圳Tcl数字技术有限公司 Video matching method and device
CN105872586A (en) * 2016-04-01 2016-08-17 成都掌中全景信息技术有限公司 Real time video identification method based on real time video streaming collection
CN106484891A (en) * 2016-10-18 2017-03-08 网易(杭州)网络有限公司 Game video-recording and playback data retrieval method and system
CN106601243A (en) * 2015-10-20 2017-04-26 阿里巴巴集团控股有限公司 Video file identification method and device
CN107426610A (en) * 2017-03-29 2017-12-01 聚好看科技股份有限公司 Video information synchronous method and device
CN107734387A (en) * 2017-10-25 2018-02-23 北京网博视界科技股份有限公司 A kind of video cutting method, device, terminal and storage medium
CN108337925A (en) * 2015-01-30 2018-07-27 构造数据有限责任公司 The method for the option that video clip and display are watched from alternate source and/or on alternate device for identification
CN110121079A (en) * 2019-05-13 2019-08-13 北京百度网讯科技有限公司 Method for processing video frequency, device, computer equipment and storage medium
CN110134829A (en) * 2019-04-28 2019-08-16 腾讯科技(深圳)有限公司 Video locating method and device, storage medium and electronic device
CN110263220A (en) * 2019-06-28 2019-09-20 北京奇艺世纪科技有限公司 A kind of video highlight segment recognition methods and device
CN110598014A (en) * 2019-09-27 2019-12-20 腾讯科技(深圳)有限公司 Multimedia data processing method, device and storage medium
WO2020007082A1 (en) * 2018-07-04 2020-01-09 北京字节跳动网络技术有限公司 Video playback processing method, terminal device, server, and storage medium
WO2020007083A1 (en) * 2018-07-04 2020-01-09 北京字节跳动网络技术有限公司 Method and apparatus for processing information associated with video, electronic device, and storage medium
CN110781348A (en) * 2019-10-25 2020-02-11 北京威晟艾德尔科技有限公司 Video file analysis method
CN111246313A (en) * 2018-11-28 2020-06-05 北京字节跳动网络技术有限公司 Video association method and device, server, terminal equipment and storage medium
CN111814922A (en) * 2020-09-07 2020-10-23 成都索贝数码科技股份有限公司 Video clip content matching method based on deep learning
CN112203115A (en) * 2020-10-10 2021-01-08 腾讯科技(深圳)有限公司 Video identification method and related device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1461142A (en) * 2003-06-30 2003-12-10 北京大学计算机科学技术研究所 Video segment searching method based on contents
CN101064846A (en) * 2007-05-24 2007-10-31 上海交通大学 Time-shifted television video matching method combining program content metadata and content analysis
CN101159834A (en) * 2007-10-25 2008-04-09 中国科学院计算技术研究所 Method and system for detecting repeatable video and audio program fragment


Cited By (46)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102595206B (en) * 2012-02-24 2014-07-02 央视国际网络有限公司 Data synchronization method and device based on sport event video
CN102595206A (en) * 2012-02-24 2012-07-18 央视国际网络有限公司 Data synchronization method and device based on sport event video
CN102883127B (en) * 2012-09-21 2016-05-11 浙江宇视科技有限公司 Method and device for slicing video
CN102883127A (en) * 2012-09-21 2013-01-16 浙江宇视科技有限公司 Method and device for slicing video
CN102932693A (en) * 2012-11-09 2013-02-13 北京邮电大学 Method and device for prefetching video-frequency band
CN102932693B (en) * 2012-11-09 2015-06-10 北京邮电大学 Method and device for prefetching video-frequency band
US10229323B2 (en) 2013-10-30 2019-03-12 Yulong Computer Telecommunications Scientific (Shenzhen) Co., Ltd. Terminal and method for managing video file
WO2015061979A1 (en) * 2013-10-30 2015-05-07 宇龙计算机通信科技(深圳)有限公司 Terminal and method for managing video file
CN104657376B (en) * 2013-11-20 2018-09-18 航天信息股份有限公司 Method and device for searching video programs based on program relationship
CN104657376A (en) * 2013-11-20 2015-05-27 航天信息股份有限公司 Searching method and searching device for video programs based on program relationship
CN103596016A (en) * 2013-11-20 2014-02-19 韩巍 Multimedia video data processing method and device
CN103596016B (en) * 2013-11-20 2018-04-13 韩巍 Multimedia video data processing method and device
CN104994426A (en) * 2014-07-07 2015-10-21 Tcl集团股份有限公司 Method and system of program video recognition
CN104994426B (en) * 2014-07-07 2020-07-21 Tcl科技集团股份有限公司 Program video identification method and system
CN105376627A (en) * 2014-08-25 2016-03-02 中兴通讯股份有限公司 Video data source playback method, device and system
CN105376627B (en) * 2014-08-25 2019-10-11 南京中兴软件有限责任公司 Film source playback method, apparatus and system
CN104410906A (en) * 2014-11-18 2015-03-11 北京国双科技有限公司 Detection method and detection device for video playing behavior
WO2016101256A1 (en) * 2014-12-24 2016-06-30 深圳Tcl数字技术有限公司 Video matching method and device
CN108337925B (en) * 2015-01-30 2024-02-27 构造数据有限责任公司 Method for identifying video clips and displaying options viewed from alternative sources and/or on alternative devices
CN108337925A (en) * 2015-01-30 2018-07-27 构造数据有限责任公司 The method for the option that video clip and display are watched from alternate source and/or on alternate device for identification
CN106601243A (en) * 2015-10-20 2017-04-26 阿里巴巴集团控股有限公司 Video file identification method and device
CN106601243B (en) * 2015-10-20 2020-11-06 阿里巴巴集团控股有限公司 Video file identification method and device
CN105472407A (en) * 2015-12-15 2016-04-06 北京网博视界科技股份有限公司 Automatic video index and alignment method based on continuous image features
CN105872586A (en) * 2016-04-01 2016-08-17 成都掌中全景信息技术有限公司 Real time video identification method based on real time video streaming collection
CN106484891A (en) * 2016-10-18 2017-03-08 网易(杭州)网络有限公司 Game video-recording and playback data retrieval method and system
CN107426610A (en) * 2017-03-29 2017-12-01 聚好看科技股份有限公司 Video information synchronous method and device
CN107734387A (en) * 2017-10-25 2018-02-23 北京网博视界科技股份有限公司 A kind of video cutting method, device, terminal and storage medium
CN107734387B (en) * 2017-10-25 2020-11-24 北京网博视界科技股份有限公司 Video cutting method, device, terminal and storage medium
US11463776B2 (en) 2018-07-04 2022-10-04 Beijing Bytedance Network Technology Co., Ltd. Video playback processing method, terminal device, server, and storage medium
CN110691281B (en) * 2018-07-04 2022-04-01 北京字节跳动网络技术有限公司 Video playing processing method, terminal device, server and storage medium
WO2020007082A1 (en) * 2018-07-04 2020-01-09 北京字节跳动网络技术有限公司 Video playback processing method, terminal device, server, and storage medium
WO2020007083A1 (en) * 2018-07-04 2020-01-09 北京字节跳动网络技术有限公司 Method and apparatus for processing information associated with video, electronic device, and storage medium
CN110691281A (en) * 2018-07-04 2020-01-14 北京字节跳动网络技术有限公司 Video playing processing method, terminal device, server and storage medium
CN110691256A (en) * 2018-07-04 2020-01-14 北京字节跳动网络技术有限公司 Video associated information processing method and device, server and storage medium
US11250267B2 (en) 2018-07-04 2022-02-15 Beijing Bytedance Network Technology Co., Ltd. Method and apparatus for processing information associated with video, electronic device, and storage medium
CN111246313A (en) * 2018-11-28 2020-06-05 北京字节跳动网络技术有限公司 Video association method and device, server, terminal equipment and storage medium
CN110134829B (en) * 2019-04-28 2021-12-07 腾讯科技(深圳)有限公司 Video positioning method and device, storage medium and electronic device
CN110134829A (en) * 2019-04-28 2019-08-16 腾讯科技(深圳)有限公司 Video locating method and device, storage medium and electronic device
CN110121079A (en) * 2019-05-13 2019-08-13 北京百度网讯科技有限公司 Method for processing video frequency, device, computer equipment and storage medium
CN110263220A (en) * 2019-06-28 2019-09-20 北京奇艺世纪科技有限公司 A kind of video highlight segment recognition methods and device
CN110598014B (en) * 2019-09-27 2021-12-10 腾讯科技(深圳)有限公司 Multimedia data processing method, device and storage medium
CN110598014A (en) * 2019-09-27 2019-12-20 腾讯科技(深圳)有限公司 Multimedia data processing method, device and storage medium
CN110781348A (en) * 2019-10-25 2020-02-11 北京威晟艾德尔科技有限公司 Video file analysis method
CN111814922A (en) * 2020-09-07 2020-10-23 成都索贝数码科技股份有限公司 Video clip content matching method based on deep learning
CN112203115A (en) * 2020-10-10 2021-01-08 腾讯科技(深圳)有限公司 Video identification method and related device
CN112203115B (en) * 2020-10-10 2023-03-10 腾讯科技(深圳)有限公司 Video identification method and related device

Also Published As

Publication number Publication date
CN102222103B (en) 2013-03-27

Similar Documents

Publication Publication Date Title
CN102222103B (en) Method and device for processing matching relationship of video content
US11048752B2 (en) Estimating social interest in time-based media
CN110351578B (en) Method and system for automatically producing video programs according to scripts
CN102342124B (en) Method and apparatus for providing information related to broadcast programs
CN110134931B (en) Medium title generation method, medium title generation device, electronic equipment and readable medium
KR20190139751A (en) Method and apparatus for processing video
CN106021496A (en) Video search method and video search device
CN111274442B (en) Method for determining video tag, server and storage medium
US20090141988A1 (en) System and method of object recognition and database population for video indexing
CN109511015B (en) Multimedia resource recommendation method, device, storage medium and equipment
CN104504109A (en) Image search method and device
CN101821734A (en) Detection and classification of matches between time-based media
CN104918060B (en) The selection method and device of point position are inserted in a kind of video ads
CN111314732A (en) Method for determining video label, server and storage medium
CN116415017B (en) Advertisement sensitive content auditing method and system based on artificial intelligence
CN104965897A (en) Information recommendation method and device
CN110889034A (en) Data analysis method and data analysis system
CN114845149B (en) Video clip method, video recommendation method, device, equipment and medium
CN117558296B (en) Determination method and device for target audio recognition model and computing equipment
KR102126839B1 (en) System for searching country-by-country literary works based on deep learning
Hanjalic et al. Indexing and retrieval of TV broadcast news using DANCERS
CN117835004A (en) Method, apparatus and computer readable medium for generating video viewpoints
CN117880443A (en) Script-based multi-mode feature matching video editing method and system
CN116992078A (en) Video tag determination method, device, equipment, storage medium and product
CN114845149A (en) Editing method of video clip, video recommendation method, device, equipment and medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20111019

Assignee: CCTV INTERNATIONAL NETWORKS WUXI CO., LTD.

Assignor: CCTV International Networks Co., Ltd.

Contract record no.: 2014990000103

Denomination of invention: Method and device for processing matching relationship of video content

Granted publication date: 20130327

License type: Exclusive License

Record date: 20140303

LICC Enforcement, change and cancellation of record of contracts on the licence for exploitation of a patent or utility model