CN106231356A - Video processing method and apparatus - Google Patents
- Publication number
- CN106231356A CN106231356A CN201610682592.9A CN201610682592A CN106231356A CN 106231356 A CN106231356 A CN 106231356A CN 201610682592 A CN201610682592 A CN 201610682592A CN 106231356 A CN106231356 A CN 106231356A
- Authority
- CN
- China
- Prior art keywords
- video
- matrix
- target
- fingerprint
- sequence
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
- H04N21/23418—Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
- H04N21/44008—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
Abstract
The invention discloses a video processing method and apparatus. The method includes: obtaining a key frame of a target video; extracting a target coding sequence of the key frame from the key frame's representation in a multi-scale transform domain, where the target coding sequence represents the image sequence of the target video based on its spatio-temporal characteristics, and using the target coding sequence as the video fingerprint of the target video; comparing the video fingerprint with the video fingerprints of the videos in a reference video library; and, if the video fingerprint matches the video fingerprint of a first video in the reference video library, determining that the target video and the first video are the same video. The invention solves the technical problem of inaccurate video identification when using video fingerprints.
Description
Technical field
The present invention relates to the field of video processing, and in particular to a video processing method and apparatus.
Background art
With the development of Internet technology, thousands of videos are uploaded to the network. However, a considerable proportion of these videos are illegal copies or revised versions of existing media. This widespread video copyright infringement has made video copyright management on the Internet a complicated process, and it has greatly accelerated the recent demand for fast and accurate copy detection algorithms. The task of video copy detection is to determine whether a given video is a duplicated version of reference video data. In general, video fingerprinting is the most widely used method in video copy detection for protecting digital video against unauthorized use. A video fingerprint is essentially a content-based signature derived from the original video segment; such a fingerprint can represent the video compactly and efficiently in search and matching procedures. Traditional video fingerprinting algorithms can be classified by the features they extract: color-space, temporal, and spatial. Each of these methods has shortcomings. For example, in the first class, color-space fingerprints are mainly derived from color histograms over temporal and/or spatial regions of the video, with RGB images usually converted to YUV (Y denotes luma, that is, the gray level; U and V denote chroma, describing the hue and saturation of a given pixel) or to the LAB color space. However, when a color-space method is used to process a black-and-white video, there is no color information from which to extract a video fingerprint, which causes the technical problem of inaccurate video identification when using video fingerprints.

For the above problem, no effective solution has yet been proposed.
Summary of the invention
Embodiments of the present invention provide a video processing method and apparatus, at least to solve the technical problem of inaccurate video identification when using video fingerprints.

According to one aspect of the embodiments of the present invention, a video processing method is provided, including: obtaining a key frame of a target video; extracting a target coding sequence of the key frame from the key frame's representation in a multi-scale transform domain, where the target coding sequence represents the image sequence of the target video based on its spatio-temporal characteristics, and using the target coding sequence as the video fingerprint of the target video; comparing the video fingerprint with the video fingerprints of the videos in a reference video library; and, if the video fingerprint matches the video fingerprint of a first video in the reference video library, determining that the target video and the first video are the same video.

According to another aspect of the embodiments of the present invention, a video processing apparatus is also provided, including: an acquiring unit for obtaining a key frame of a target video; a first extraction unit for extracting a target coding sequence of the key frame from the key frame's representation in a multi-scale transform domain, where the target coding sequence represents the image sequence of the target video based on its spatio-temporal characteristics, the target coding sequence serving as the video fingerprint of the target video; a comparing unit for comparing the video fingerprint with the video fingerprints of the videos in a reference video library; and a determining unit for determining, when the video fingerprint matches the video fingerprint of a first video in the reference video library, that the target video and the first video are the same video.

In the embodiments of the present invention, a key frame of a target video is obtained; a target coding sequence of the key frame is extracted from the key frame's representation in a multi-scale transform domain, where the target coding sequence represents the image sequence of the target video based on its spatio-temporal characteristics and serves as the video fingerprint of the target video; the video fingerprint is compared with the video fingerprints of the videos in a reference video library; and if the video fingerprint matches the video fingerprint of a first video in the reference video library, the target video and the first video are determined to be the same video. Because this video fingerprint is a transform-domain characteristic of the video, it is not affected by the color of the frames; therefore a fingerprint can also be obtained for the key frames of a black-and-white video, so that the extracted fingerprint expresses the target video accurately. This solves the prior-art problem of inaccurate representation of the target video and achieves accurate expression of the target video. Moreover, it broadens the range of video files from which a fingerprint can be obtained, so that the above approach applies to a wider variety of videos.
Brief description of the drawings

The accompanying drawings described here are provided for a further understanding of the present invention and constitute part of this application. The schematic embodiments of the present invention and their descriptions are used to explain the present invention and do not constitute an improper limitation of it. In the drawings:

Fig. 1 is a schematic diagram of a network architecture according to an embodiment of the present invention;

Fig. 2 is a flowchart of a video processing method according to an embodiment of the present invention;

Fig. 3 is a schematic diagram of image decomposition according to an embodiment of the present invention;

Fig. 4 is a schematic diagram of a test image at 4 scales according to an embodiment of the present invention;

Fig. 5 is a schematic diagram of decomposition and coding according to an embodiment of the present invention;

Fig. 6 is a flowchart of video identification according to an embodiment of the present invention;

Fig. 7 is a schematic diagram of a video processing apparatus according to an embodiment of the present invention;

Fig. 8 is a schematic diagram of a server according to an embodiment of the present invention.
Detailed description of the invention

To enable those skilled in the art to better understand the solution of the present invention, the technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings of the embodiments. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative work shall fall within the scope of protection of the present invention.

It should be noted that the terms "first", "second", and so on in the description, the claims and the above drawings are used to distinguish similar objects and are not necessarily used to describe a specific order or sequence. It should be understood that data so used are interchangeable where appropriate, so that the embodiments of the invention described here can be implemented in orders other than those illustrated or described. Moreover, the terms "include" and "have" and any variations of them are intended to cover non-exclusive inclusion; for example, a process, method, system, product or device comprising a series of steps or units is not necessarily limited to the steps or units expressly listed, but may include other steps or units that are not expressly listed or that are inherent to such a process, method, product or device.
Embodiment 1
According to an embodiment of the present invention, a method embodiment is provided that can be performed by the apparatus embodiment of this application. It should be noted that the steps shown in the flowchart of the accompanying drawings can be performed in a computer system, such as with a set of computer-executable instructions, and, although a logical order is shown in the flowchart, in some cases the steps shown or described can be performed in an order different from the one given here.

According to an embodiment of the present invention, a video processing method is provided.

Optionally, in this embodiment, the above video processing method can be applied in the hardware environment formed by the terminal 102 and the server 104 shown in Fig. 1. As shown in Fig. 1, the terminal 102 is connected with the server 104 through a network. The above network includes, but is not limited to: a mobile communication network, a wide area network, a metropolitan area network or a local area network; the terminal 102 may be a mobile phone terminal, a PC terminal, a notebook terminal or a tablet computer terminal.

Fig. 2 is a flowchart of the video processing method according to an embodiment of the present invention. The video processing method provided by the embodiment of the present invention is introduced in detail below with reference to Fig. 2. As shown in Fig. 2, this video processing method mainly includes the following steps:
Step S202: obtain a key frame of the target video. Before a video is identified, the key frames of the target video are determined. The key frames of the target video can be extracted with the Temporal Informative Representative Image (TIRI) algorithm; the concrete extraction method is described in detail later.

Step S204: extract the target coding sequence of the key frame from the key frame's representation in the multi-scale transform domain, where the target coding sequence represents the image sequence of the target video based on its spatio-temporal characteristics, and use the target coding sequence as the video fingerprint of the target video.

In this embodiment, the stationary wavelet transform (SWT) is used to obtain the target coding sequence. In general, if an ordinary image is degraded, the linear relationships at low scales are preserved but are disturbed at high scales, especially at fine scales. Based on this, the SWT has the property of multi-scale analysis; moreover, the SWT overcomes the lack of shift invariance of the discrete wavelet transform (DWT). Specifically, shift invariance is achieved by a filter design without downsampling.
Fig. 3 shows the implementation of the SWT. As shown in Fig. 3, the input image is first decomposed into two channels without downsampling, and the decomposed image is then decomposed again recursively, splitting into a low-pass image and a high-pass image. The above decomposition is obtained relatively easily under the following condition:

H0(z)·G0(z) + H1(z)·G1(z) = 1

where H0(z) and H1(z) are the transfer functions of the low-pass filter and the high-pass filter, respectively.
Fig. 4 shows a test image at the 4 scales of the SWT. Image S4 is the high-pass image of the first decomposition, and image S1 is the low-pass image of the last decomposition. Essentially, scales S1 through S4 respectively represent the coarse-scale information of the different frequency bands (low frequencies) up to the fine-scale detail information (high frequencies).
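The no-downsampling property can be sketched in a few lines of NumPy. The following is an illustrative à trous decomposition with Haar filters, which satisfy the condition H0(z)·G0(z) + H1(z)·G1(z) = 1: at each level the 2-tap filters are applied at a spacing of 2^level instead of downsampling, so every subband keeps the full N × N size. The function name and the merging of the directional detail images into a single array are simplifying assumptions; a full SWT keeps horizontal, vertical and diagonal subbands separately.

```python
import numpy as np

def atrous_haar(image, levels=3):
    """Illustrative undecimated (a trous) decomposition with Haar filters.
    At each level the 2-tap analysis filters are applied at spacing
    2**level instead of downsampling the image, so every subband keeps
    the full input size -- the shift-invariance property of the SWT."""
    low = image.astype(float)
    details = []
    for level in range(levels):
        step = 2 ** level                       # insert "holes" between filter taps
        rolled = np.roll(low, -step, axis=0)
        lo_v, hi_v = (low + rolled) / 2.0, (low - rolled) / 2.0       # vertical pass
        rolled = np.roll(lo_v, -step, axis=1)
        new_low, hi_h = (lo_v + rolled) / 2.0, (lo_v - rolled) / 2.0  # horizontal pass
        details.append(hi_v + hi_h)             # directional details merged here (a
        low = new_low                           # real SWT keeps them as separate subbands)
    return low, details                         # low ~ S1; details[-1] ~ S2 ... details[0] ~ S4

frame = np.arange(128 * 128, dtype=float).reshape(128, 128)  # stand-in key frame
s1, details = atrous_haar(frame, levels=3)
assert s1.shape == (128, 128) and all(d.shape == (128, 128) for d in details)
```

Note that, unlike a DWT where each level halves the image, all four scale images here have the same 128 × 128 size as the input key frame.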
The extraction of the target coding sequence in this embodiment is illustrated below with reference to Fig. 5.

First, the input image f(m, n) is decomposed with the SWT, yielding a low-pass image and a high-pass image. The low-pass image is expressed by H0(z) and the high-pass image by H1(z). The input image f(m, n) is the key frame of this embodiment, and the key frame is represented by an N × N matrix.

Then the low-pass image H0(z) obtained from the decomposition is decomposed again, similarly yielding a high-pass image H1(z²) and a low-pass image H0(z²). The low-pass image H0(z²) is decomposed once more, yielding a high-pass image H1(z⁴) and a low-pass image H0(z⁴).

After these three decompositions are completed, we obtain the low-pass image H0(z⁴) at scale S1, the high-pass image H1(z⁴) at scale S2, the high-pass image H1(z²) at scale S3, and the high-pass image H1(z) at scale S4.

Of the above 4 scales, the image expressed at scale S2 (i.e. the second decomposed image) and the image expressed at scale S1 (i.e. the first decomposed image) are downsampled.

That is, extracting the target coding sequence of the key frame includes: decomposing the key frame with the stationary wavelet transform to obtain a first decomposed image expressed at scale S1 and a second decomposed image expressed at scale S2; coding the first decomposed image to obtain a first coded sequence and coding the second decomposed image to obtain a second coded sequence; and concatenating the first coded sequence and the second coded sequence to obtain the target coding sequence.

That is, coding the first decomposed image to obtain the first coded sequence and coding the second decomposed image to obtain the second coded sequence includes: downsampling the first decomposed image to obtain an image represented by a first matrix and downsampling the second decomposed image to obtain an image represented by a second matrix; and hash-coding the first matrix to obtain the first coded sequence and hash-coding the second matrix to obtain the second coded sequence.

Specifically, during the decomposition the matrices expressing the first decomposed image and the second decomposed image are always N × N; after downsampling, the matrix expressing the second decomposed image at scale S2 and the matrix expressing the first decomposed image at scale S1 become smaller M × K matrices. Specifically, the image at scale S1 is expressed by an M1 × K1 matrix (the first matrix), and the image at scale S2 by an M2 × K2 matrix (the second matrix). Normally M1 and K1 are equal, and M2 and K2 are equal. This embodiment uses M1 = K1 = M2 = K2 = M.
The downsampled images are then hash-coded. That is, hash-coding the first matrix to obtain the first coded sequence includes: obtaining the median of the first matrix; computing the difference between each element of the first matrix and the median; setting the value of each element whose difference is greater than 0 to 1 and the value of each element whose difference is less than 0 to 0; and connecting adjacent elements in turn to obtain the first coded sequence, where the first coded sequence is a combination of 0s and 1s.
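The downsampling and median-threshold binarization steps can be sketched as follows. Block averaging is an assumed downsampling method, since the embodiment fixes only the matrix sizes, and the function name is illustrative.

```python
import numpy as np

def median_hash(subband, m=6):
    """Downsample an N x N subband to an m x m matrix by block averaging
    (the downsampling method is an assumption -- the embodiment only
    fixes the sizes), then binarize against the median of that matrix."""
    n = subband.shape[0]
    block = n // m                      # e.g. 128 // 6 = 21; trim the remainder
    trimmed = subband[:block * m, :block * m]
    small = trimmed.reshape(m, block, m, block).mean(axis=(1, 3))  # the M x M matrix
    return (small > np.median(small)).astype(np.uint8)  # 1 above the median, else 0

bits = median_hash(np.random.default_rng(0).random((128, 128)), m=6)
assert bits.shape == (6, 6)
assert bits.sum() == 18   # the median splits the 36 values into 18 ones and 18 zeros
```

A useful side effect of thresholding at the median is that each hash matrix always contains an equal number of 0s and 1s, which keeps the fingerprint bits balanced.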
Specifically, the hash coding of the image at scale S1 and the hash coding of the image at scale S2 are done in the same way; the image at scale S1 is taken as an example.

The median of the first matrix M1 × K1 is obtained. This median is used as a threshold, and each element of M1 × K1 is compared with it: the values of the elements greater than the threshold are replaced with 1, and the values of the elements less than the threshold are replaced with 0. Through this comparison, M1 × K1 is transformed into a matrix composed of 0s and 1s. Since this embodiment uses M1 = K1 = M2 = K2 = M, the resulting matrix is an M × M matrix. The same conversion is applied to the image at scale S2 and is not repeated here.
The M × M matrix is then scanned in a snake order: as shown in Fig. 5, the elements of the matrix are read along the direction of the arrows in Fig. 5, turning the matrix into a one-dimensional sequence. Snake-scanning the M × M matrix at scale S1 yields the first coded sequence, and snake-scanning the M × M matrix at scale S2 yields the second coded sequence. Concatenating the first coded sequence and the second coded sequence yields the target coding sequence, which can serve as the video fingerprint of the target video. The concatenation can join the first coded sequence and the second coded sequence end to end.
When choosing the parameters, the length of the hash code at scale S1 should be more than 31 bits, because at this length a relatively low false positive rate can be obtained. Based on this, N = 128 for the matrix of the input image (the key frame), and M = 6 for the first and second matrices that express the images at scales S1 and S2 after downsampling. The first coded sequence and the second coded sequence are therefore each 36 bits long, so the target coding sequence is 72 bits long.

Step S206: compare the video fingerprint with the video fingerprints of the videos in the reference video library.

Step S208: if the video fingerprint matches the video fingerprint of a first video in the reference video library, determine that the target video and the first video are the same video.

The fingerprints of the videos in the reference video library are obtained by the same method. Because this video fingerprint is a transform-domain characteristic of the video and is not affected by the color of the frames, a fingerprint can also be obtained for the key frames of a black-and-white video, so that the extracted fingerprint expresses the target video accurately. This solves the prior-art problem of inaccurate representation of the target video and achieves accurate expression of the target video. Moreover, it broadens the range of video files from which a fingerprint can be obtained, so that the above approach applies to a wider variety of videos.
After the video fingerprint is obtained, it can be used to judge whether the target video is a copied video, or to determine information such as the title of the target video by comparison with the videos in the reference video library. Besides copyright detection and video identification, video fingerprints can also be applied in other scenarios such as video classification. Copyright detection and video identification are illustrated separately below; in both scenarios the video fingerprint is obtained in the manner described above.

Scenario 1:

After the target video and the first video are determined to be the same video, first indication information is sent, where the first indication information indicates that the target video is a copy of the first video.

The reference video library stores non-copied videos; when the target video is found to have the same video fingerprint as some video in the reference video library, the target video is determined to be a copy. The videos in the reference video library are non-copied videos that can be played by a video playback application but cannot be downloaded through it; therefore, a target video with the same video fingerprint as a video in this reference library is exactly a copied video. Copyright protection of video files can be realized on this basis, as illustrated below with reference to Fig. 6.
S601: legal, copyright-protected videos are stored in the reference video library.

S602: each video in the reference video library has a video fingerprint. The fingerprints of the reference library are extracted with the same steps as the fingerprint extraction above, and this is not repeated.

S603: the video fingerprints are hash-coded.

S604: the hash codes are stored in a database.

S605: a video segment of the target video is obtained; this video segment can be the key frame described above.

S606: the video fingerprint of the video segment is extracted.

S607: the video fingerprint of the video segment is hash-coded.

S608: the database is searched for the fingerprint of a video in the reference video library that matches the fingerprint of the video segment.

S609: after a matching fingerprint is found, the video to which the segment belongs is determined to be an unauthorized copy.
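Steps S608 and S609 amount to a nearest-neighbor search over binary fingerprints. A minimal sketch, assuming Hamming distance with a small tolerance; the tolerance of 4 bits and the library contents are assumptions, not values taken from the patent:

```python
import numpy as np

def best_match(query, library, max_distance=4):
    """Nearest-neighbor search over binary fingerprints by Hamming
    distance; a match is reported only within a small tolerance so that
    slightly degraded copies are still caught (the tolerance of 4 bits
    is an assumed parameter, not taken from the patent)."""
    best_name, best_dist = None, max_distance + 1
    for name, ref in library.items():
        dist = int(np.count_nonzero(query != ref))
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name

rng = np.random.default_rng(1)
library = {f"video_{i}": rng.integers(0, 2, 72, dtype=np.uint8) for i in range(5)}
query = library["video_3"].copy()
query[:2] ^= 1                          # a copy with 2 corrupted fingerprint bits
assert best_match(query, library) == "video_3"
```

Allowing a small Hamming distance rather than requiring exact equality is what lets a re-encoded or slightly degraded copy still match its reference fingerprint.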
In the system shown in Fig. 6, TIRI (Temporal Informative Representative Image) is used to obtain the video segments, i.e. to obtain the key frames of the target video. TIRI captures the temporal characteristics of the video, and these temporal characteristics are used to generate the video fingerprint.

For this purpose, the video frames are divided into several parts in preprocessing, each part containing S images. For each part, the TIRI is obtained by computing a weighted average of these S images. Essentially, a TIRI is a blurred image that contains the motion information that may be present in the video sequence. The TIRI can be generated with the following formula:

TIRI(x, y) = Σ_{i=1..S} w_i · p(x, y, i)

where p(x, y, i) is the luminance value of the i-th of the S images at pixel (x, y), and w_i is a weighting factor, which can be constant, linear, or exponential. Experimental data show that exponential weights best capture the motion information. Therefore, in the CBCD (Content-Based Copy Detection) system shown in Fig. 6, the exponential weights w_i = β^i with β = 0.65 are used. When the CBCD system shown in Fig. 6 obtains a video fingerprint, it uses the fingerprint-extraction method provided by this embodiment.
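The weighted-average TIRI computation with exponential weights w_i = β^i and β = 0.65 can be sketched as follows; normalizing by the sum of the weights is an assumption, made so the output stays in the input intensity range.

```python
import numpy as np

def tiri(frames, beta=0.65):
    """Weighted average of S luminance frames with exponential weights
    w_i = beta**i, beta = 0.65; dividing by the sum of the weights is
    an assumption so the output stays in the input intensity range."""
    s = len(frames)
    weights = np.array([beta ** i for i in range(1, s + 1)])
    weights /= weights.sum()
    return np.tensordot(weights, np.stack(frames), axes=1)  # the blurred TIRI image

frames = [np.full((4, 4), float(i)) for i in range(1, 5)]   # S = 4 toy frames
image = tiri(frames)
assert image.shape == (4, 4)
assert 1.0 < float(image[0, 0]) < 2.5   # early frames dominate: below the plain mean
```

Because β < 1, earlier frames in the group receive larger weights, so a moving object leaves a fading trail in the TIRI; that blur is exactly the motion information the fingerprint exploits.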
In this embodiment, a robust CBCD system reaches a balance between precision and recall (robustness). This embodiment uses Fλ as an aggregate indicator to measure the performance of the CBCD system, where Fλ is defined as:

Fλ = (1 + λ²) · p · r / (λ² · p + r)

where λ is the weight combining precision and recall, p denotes the precision and r denotes the recall. In this embodiment, the balance of precision and recall is obtained when λ is taken as 1.
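A small helper for the Fλ indicator, assuming the standard weighted F-measure form (1 + λ²)·p·r / (λ²·p + r), which is balanced at λ = 1 as the text states:

```python
def f_lambda(precision, recall, lam=1.0):
    """Aggregate indicator combining precision p and recall r, assuming
    the standard weighted F-measure form; at lam = 1 it reduces to the
    harmonic mean 2pr/(p + r), the balanced point mentioned in the text."""
    return ((1 + lam ** 2) * precision * recall
            / (lam ** 2 * precision + recall))

assert abs(f_lambda(0.5, 0.5) - 0.5) < 1e-12   # balanced at lam = 1
```

Values of λ above 1 weight recall more heavily, and values below 1 weight precision more heavily, which is why λ = 1 is the balanced setting.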
Further, detecting and locating copies of video segments are the two main functions of a CBCD system. The main purpose of detection is to detect any duplicated reference video segment, and the main purpose of localization is to locate where the matched segment is duplicated in the video. The detection error rate of the CBCD system is less than 0.01%.

In general, the accuracy of locating copied video segments reflects the detection performance of the system and determines how many localization requests are positioned accurately; in this embodiment, the average accuracy of the CBCD system is about 98%.

Scenario 2:

After the target video and the first video are determined to be the same video, the video title of the first video and the playback path of the first video are extracted from the reference video library, and the video title and playback path are pushed.

Pushing the video title and the playback path can be applied in instant-messaging clients, video clients, live-streaming clients, and the like. When a recorded video segment (the target video) is uploaded to the server, the server uses the uploaded target video to search the reference video library; if the target video is found to be the same video as some video in the reference library, the title and playback path of that first video in the reference library can be pushed.

An application client capable of playing video, such as a video playback client or a browser client, can then play the complete video corresponding to the target video according to the above playback path. In this way, even when a user has seen only a fragment of a video and does not know its title or playback path, the complete video corresponding to that fragment can be played.
It should be noted that, for brevity of description, each of the foregoing method embodiments is expressed as a series of combined actions, but those skilled in the art should know that the present invention is not limited by the described order of actions, because according to the present invention some steps can be performed in other orders or simultaneously. Secondly, those skilled in the art should also know that the embodiments described in this description are preferred embodiments, and the actions and modules involved are not necessarily required by the present invention.

Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be realized by software plus the necessary general hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on such an understanding, the part of the technical solution of the present invention that contributes to the prior art can be embodied in the form of a software product. This computer software product is stored in a storage medium (such as ROM/RAM, a magnetic disk or an optical disc) and includes instructions for causing a terminal device (which can be a mobile phone, a computer, a server, a network device, or the like) to perform the method described in each embodiment of the present invention.
Embodiment 2

According to an embodiment of the present invention, a video processing apparatus for implementing the above video processing method is also provided. The processing apparatus is mainly used to perform the video processing method provided by the foregoing content of the embodiments of the present invention. The video processing apparatus provided by the embodiment of the present invention is introduced in detail below:

Fig. 7 is a schematic diagram of the video processing apparatus according to an embodiment of the present invention. As shown in Fig. 7, this video processing apparatus mainly includes:

The acquiring unit 10, used to obtain a key frame of the target video. Before a video is identified, the key frames of the target video are determined. The key frames of the target video can be extracted with the Temporal Informative Representative Image (TIRI) algorithm.

The first extraction unit 20, used to extract the target coding sequence of the key frame from the key frame's representation in the multi-scale transform domain, where the target coding sequence represents the image sequence of the target video based on its spatio-temporal characteristics, the target coding sequence serving as the video fingerprint of the target video.
In the present embodiment, wavelet transformation (Stationary Wavelet Tranform is called for short SWT) is stablized in employing
Obtain target coding sequence.In general, if a common image is deteriorated, the linear relationship on low yardstick can be protected
Stay, but can be disturbed on high yardstick, the most high-precision high yardstick.Based on this, SWT has the characteristic of multiscale analysis, and
And, SWT can overcome the TIME SHIFT INVARIANCE of wavelet transform (Discrete Wave Transform is called for short DWT).Specifically
Ground, TIME SHIFT INVARIANCE uses the filter design algorithm without down-sampling.
Fig. 3 shows how the SWT is realized. As shown in Fig. 3, the input image is first decomposed into two channels without downsampling, and the decomposed image is then decomposed again recursively into a low-pass image and a high-pass image. The decomposition can be obtained relatively easily under the following condition:
H0(z)G0(z) + H1(z)G1(z) = 1, where H0(z) and H1(z) are the transfer functions of the low-pass filter and the high-pass filter, respectively, and G0(z) and G1(z) are the corresponding synthesis filters.
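This two-channel condition can be checked numerically. The Haar filters below are an assumed example (the text does not fix a particular wavelet); as is usual in practice, the condition holds up to a one-sample delay, which still gives perfect reconstruction:

```python
import numpy as np

# Haar analysis filters h0, h1 and synthesis filters g0, g1
# (assumed example filters; the text does not fix a wavelet).
h0 = np.array([0.5, 0.5]);  g0 = np.array([0.5, 0.5])
h1 = np.array([0.5, -0.5]); g1 = np.array([-0.5, 0.5])

# H0(z)G0(z) + H1(z)G1(z) as polynomial (convolution) coefficients
pr = np.convolve(h0, g0) + np.convolve(h1, g1)
print(pr)  # [0, 1, 0]: a pure one-sample delay, i.e. perfect reconstruction
```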
Fig. 4 shows a test image at the four scales of the SWT, where image S4 is the high-pass image from the first decomposition and image S1 is the low-pass image from the last decomposition. Essentially, scales S1 through S4 represent different frequency bands, from the coarse low-scale information (low frequency) to the finer high-scale detail (high frequency).
The extraction of the target coding sequence in this embodiment is described below with reference to Fig. 5.
First, SWT decomposition is applied to the input image f(m, n), yielding a low-pass image and a high-pass image, where the low-pass image is expressed with H0(z) and the high-pass image with H1(z). The input image f(m, n) is the key frame in this embodiment, and the key frame is expressed as an N × N matrix.
Then the low-pass image H0(z) obtained from this decomposition is decomposed again, similarly yielding a high-pass image H1(z^2) and a low-pass image H0(z^2). The low-pass image H0(z^2) is decomposed once more, yielding a high-pass image H1(z^4) and a low-pass image H0(z^4). After these three decompositions, the low-pass image H0(z^4) on scale S1, the high-pass image H1(z^4) on scale S2, the high-pass image H1(z^2) on scale S3, and the high-pass image H1(z) on scale S4 are obtained.
Of the four scales, the image expressed on scale S2 (the second decomposed image) and the image expressed on scale S1 (the first decomposed image) are downsampled.
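The three-level decomposition above can be sketched as follows. This is a simplified, starlet-style undecimated scheme standing in for the SWT, not the exact filter bank of the embodiment; it only illustrates the key property that no band is downsampled during the transform itself, so every band keeps the full N × N size:

```python
import numpy as np

def undecimated_decompose(img, levels=3):
    """Recursively split `img` into one low-pass image plus one detail
    (high-pass) image per level, with no downsampling, so every band
    stays N x N. A starlet-style sketch standing in for the SWT."""
    c = np.asarray(img, dtype=float)
    details = []
    for j in range(levels):
        step = 2 ** j  # dilate the smoothing filter at each level
        smooth = 0.25 * (np.roll(c, step, 0) + np.roll(c, -step, 0) +
                         np.roll(c, step, 1) + np.roll(c, -step, 1))
        details.append(c - smooth)  # detail band (S4, S3, S2 in turn)
        c = smooth                  # recurse on the low-pass image
    return c, details               # c plays the role of the S1 low-pass image

key_frame = np.random.default_rng(1).random((128, 128))
low, bands = undecimated_decompose(key_frame, levels=3)
print(low.shape, [b.shape for b in bands])       # all bands stay (128, 128)
print(np.allclose(low + sum(bands), key_frame))  # True: bands sum back exactly
```

Because each detail band is the difference between consecutive smoothings, the low-pass image plus all detail bands reconstructs the input exactly, mirroring the perfect-reconstruction condition stated earlier.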
Optionally, the extraction unit includes: a decomposition subunit, configured to decompose the key frame with the Stationary Wavelet Transform to obtain the first decomposed image expressed on scale S1 and the second decomposed image expressed on scale S2; a processing subunit, configured to encode the first decomposed image to obtain a first coded sequence and to encode the second decomposed image to obtain a second coded sequence; and a connection subunit, configured to concatenate the first coded sequence and the second coded sequence to obtain the target coding sequence.
Optionally, the processing subunit includes: a sampling module, configured to downsample the first decomposed image to obtain an image represented by a first matrix and to downsample the second decomposed image to obtain an image represented by a second matrix; and a coding module, configured to hash-encode the first matrix to obtain the first coded sequence and to hash-encode the second matrix to obtain the second coded sequence.
Specifically, during the decomposition the matrices expressing the first and second decomposed images remain N × N; after downsampling, the matrix expressing the second decomposed image on scale S2 and the matrix expressing the first decomposed image on scale S1 become smaller M × K matrices. Specifically, the image on scale S1 becomes an M1 × K1 matrix (the first matrix), and the image on scale S2 becomes an M2 × K2 matrix (the second matrix). Under normal circumstances M1 equals K1 and M2 equals K2; this embodiment uses M1 = K1 = M2 = K2 = M.
The downsampled images are then hash-encoded. That is, the coding module includes: an acquiring submodule, configured to obtain the median of the first matrix; a judging submodule, configured to compute the difference between each element of the first matrix and the median; a setting submodule, configured to set the value of each element of the first matrix whose difference is greater than 0 to 1 and the value of each element whose difference is less than 0 to 0; and a connection submodule, configured to connect adjacent elements in sequence to obtain the first coded sequence, where the first coded sequence is a combination of 0s and 1s.
Specifically, the hash coding of the image on scale S1 is identical to the hash coding of the image on scale S2; the image on scale S1 is taken as an example.
The median of the first matrix M1 × K1 is obtained and used as a threshold, and each element of M1 × K1 is compared with this threshold. For elements whose values exceed the threshold, the values are replaced with 1; for elements whose values are below the threshold, the values are replaced with 0. Through this comparison, M1 × K1 is transformed into a matrix composed of 0s and 1s. Since this embodiment uses M1 = K1 = M2 = K2 = M, the resulting matrix is an M × M matrix. The image on scale S2 undergoes the same transformation, which is not repeated here.
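The median-threshold hash coding just described can be sketched as follows (elements exactly equal to the median are mapped to 0 here; the text leaves that edge case open):

```python
import numpy as np

def median_hash(mat):
    """Hash coding step: elements above the matrix median become 1,
    the rest become 0, giving a matrix composed of 0s and 1s."""
    return (mat > np.median(mat)).astype(np.uint8)

m1 = np.random.default_rng(2).random((6, 6))  # downsampled first matrix, M = 6
bits = median_hash(m1)
print(bits.shape)  # (6, 6): a binary M x M matrix
```

Thresholding at the median rather than at a fixed value makes the code invariant to global brightness and contrast changes, since those shift all elements and the median together.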
The M × M matrix is then snake-scanned: as shown in Fig. 5, the elements of the matrix are scanned in the direction of the arrows in Fig. 5, turning the matrix into a one-dimensional sequence. Snake-scanning the M × M matrix on scale S1 yields the first coded sequence, and snake-scanning the M × M matrix on scale S2 yields the second coded sequence. Concatenating the first coded sequence and the second coded sequence produces the target coding sequence, which can serve as the video fingerprint of the target video. The concatenation may join the first coded sequence and the second coded sequence end to end.
When the parameters are selected, the hash code on scale S1 should be longer than 31 bits, because at this length a relatively low false positive rate can be obtained. Accordingly, N = 128 in the matrix of the above input image (the key frame), and M = 6 in the first and second matrices of the downsampled images on scales S1 and S2. The first coded sequence and the second coded sequence are therefore each 36 bits long, so the target coding sequence is 72 bits long.
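The snake scan and the end-to-end concatenation can be sketched as follows; with M = 6, each binary matrix contributes 36 bits, giving the 72-bit fingerprint computed above. The scan direction (left-to-right on even rows, reversed on odd rows) is one common reading of Fig. 5's arrows:

```python
import numpy as np

def snake_scan(mat):
    """Scan along the arrows of Fig. 5: left-to-right on even rows,
    right-to-left on odd rows, flattening the matrix to a 1-D sequence."""
    rows = np.asarray(mat).tolist()
    return [v for i, row in enumerate(rows)
            for v in (row if i % 2 == 0 else row[::-1])]

rng = np.random.default_rng(3)
s1_bits = (rng.random((6, 6)) > 0.5).astype(int)  # stand-in hashed S1 matrix
s2_bits = (rng.random((6, 6)) > 0.5).astype(int)  # stand-in hashed S2 matrix
fingerprint = snake_scan(s1_bits) + snake_scan(s2_bits)  # joined end to end
print(len(fingerprint))              # 72, as computed in the text
print(snake_scan([[1, 2], [3, 4]]))  # [1, 2, 4, 3]: second row is reversed
```

The boustrophedon order keeps spatially adjacent matrix elements adjacent in the sequence, which a plain row-by-row flattening would not do at row boundaries.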
The comparing unit 30 is configured to compare the video fingerprint with the video fingerprints of the videos in a reference video library.
The determining unit 40 is configured to determine, when the comparison shows that the video fingerprint is consistent with the video fingerprint of a first video in the reference video library, that the target video and the first video are the same video.
The video fingerprints of the videos in the reference video library are obtained with the same method. Because the video fingerprint captures the characteristics of the video in the transform domain, it is unaffected by the colors of the pictures, so a video fingerprint of the key frames can be obtained even for a black-and-white video. The obtained video fingerprint can therefore express the target video accurately, which solves the problem in the prior art of expressing the target video inaccurately and achieves the effect of expressing the target video accurately. Moreover, the range of video files from which a video fingerprint can be obtained is expanded, so that the above approach can be applied to a wider variety of videos.
Optionally, the apparatus further includes: an issuing unit, configured to send first indication information after the target video and the first video are determined to be the same video, where the first indication information is used to indicate that the target video is a copy of the first video.
The reference video library stores non-copied videos. When the comparison shows that the target video has the same video fingerprint as some video in the reference video library, the target video is determined to be a copied video. The videos in the reference video library are non-copied videos that a video playback application can play but cannot download; therefore, a target video that has the same video fingerprint as a video in this reference video library is a copied video.
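The comparison of fingerprints can be sketched as a Hamming-distance check. The text requires the fingerprints to be consistent, i.e. distance 0 for an exact match; the `max_distance` tolerance below is an assumption added for illustration, showing how minor distortions of the target video could be absorbed:

```python
def hamming(a, b):
    """Number of positions where two equal-length bit sequences differ."""
    return sum(x != y for x, y in zip(a, b))

def is_copy(target_fp, reference_fps, max_distance=0):
    """Flag the target as a copy when some reference fingerprint is
    consistent with it. max_distance=0 means exact consistency; a small
    positive tolerance (an assumption, not fixed by the text) would
    absorb minor distortions."""
    return any(hamming(target_fp, fp) <= max_distance for fp in reference_fps)

ref_library = [[0, 1, 1, 0], [1, 1, 0, 0]]      # toy 4-bit fingerprints
print(is_copy([1, 1, 0, 0], ref_library))  # True: matches a reference exactly
print(is_copy([1, 0, 1, 0], ref_library))  # False: no reference is consistent
```

In a real deployment the reference fingerprints would be the 72-bit sequences produced above, and the library scan would typically be indexed rather than linear.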
Optionally, the apparatus further includes: a second extraction unit, configured to extract, after the target video and the first video are determined to be the same video, the video name of the first video in the reference video library and the playback path of the first video; and a pushing unit, configured to push the video name and the playback path.
The video name and playback path can be pushed to an instant messaging client, a video client, a live streaming client, and the like. When a recorded video segment (the target video) is uploaded to the server, the server uses the uploaded target video to search for the video in the reference video library; if the target video is found to be the same video as some video in the reference library, the name and playback path of the first video in the reference library can be pushed.
An application client that can play video, such as a video playback client or a browser client, can then play the complete video corresponding to the target video from the above playback path. In this way, even when the user has only seen a fragment of a video and does not know the name or playback path of that video, the complete video corresponding to the fragment can be played.
Embodiment 3
According to an embodiment of the present invention, a server for implementing the above video processing method is further provided. As shown in Fig. 8, the server mainly includes a processor 801, a data interface 803, a memory 805, and a network interface 807, where:
The data interface 803 transfers the video segment (the target video) obtained by a third-party tool to the processor 801 by way of data transmission.
The memory 805 is mainly used to store the videos in the reference video library and the target video.
The network interface 807 is mainly used for network communication with terminals and receives the reference videos collected by the terminals.
The processor 801 is mainly configured to perform the following operations: obtaining a key frame of a target video; extracting a target coding sequence of the key frame according to the expression of the key frame in the multi-scale transform domain, where the target coding sequence represents the image sequence of the target video based on its spatio-temporal characteristics and serves as the video fingerprint of the target video; comparing the video fingerprint with the video fingerprints of the videos in a reference video library; and, if the comparison shows that the video fingerprint is consistent with the video fingerprint of a first video in the reference video library, determining that the target video and the first video are the same video.
The processor 801 is further configured such that extracting the target coding sequence of the key frame according to the expression of the key frame in the multi-scale transform domain includes: decomposing the key frame with the Stationary Wavelet Transform to obtain the first decomposed image expressed on scale S1 and the second decomposed image expressed on scale S2; encoding the first decomposed image to obtain a first coded sequence and encoding the second decomposed image to obtain a second coded sequence; and concatenating the first coded sequence and the second coded sequence to obtain the target coding sequence.
The processor 801 is further configured such that encoding the first decomposed image to obtain the first coded sequence and encoding the second decomposed image to obtain the second coded sequence includes: downsampling the first decomposed image to obtain an image represented by a first matrix and downsampling the second decomposed image to obtain an image represented by a second matrix; and hash-encoding the first matrix to obtain the first coded sequence and hash-encoding the second matrix to obtain the second coded sequence.
The processor 801 is further configured such that hash-encoding the first matrix to obtain the first coded sequence includes: obtaining the median of the first matrix; computing the difference between each element of the first matrix and the median; setting the value of each element of the first matrix whose difference is greater than 0 to 1 and the value of each element whose difference is less than 0 to 0; and connecting adjacent elements in sequence to obtain the first coded sequence, where the first coded sequence is a combination of 0s and 1s.
The processor 801 is further configured to send, after the target video and the first video are determined to be the same video, first indication information, where the first indication information is used to indicate that the target video is a copy of the first video.
The processor 801 is further configured to extract, after the target video and the first video are determined to be the same video, the video name of the first video in the reference video library and the playback path of the first video, and to push the video name and the playback path.
Optionally, for concrete examples of this embodiment, reference may be made to the examples described in Embodiment 1 and Embodiment 2 above, which are not repeated here.
Embodiment 4
An embodiment of the present invention further provides a storage medium. Optionally, in this embodiment, the storage medium may be used to store the program code of the video processing method of the embodiment of the present invention.
Optionally, in this embodiment, the storage medium may be located on at least one of multiple network devices in a mobile communication network, a wide area network, a metropolitan area network, or a local area network.
Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps:
S1, obtaining a key frame of a target video;
S2, extracting a target coding sequence of the key frame according to the expression of the key frame in the multi-scale transform domain, the target coding sequence serving as the video fingerprint of the target video;
S3, comparing the video fingerprint with the video fingerprints of the videos in a reference video library;
S4, if the comparison shows that the video fingerprint is consistent with the video fingerprint of a first video in the reference video library, determining that the target video and the first video are the same video.
Optionally, in this embodiment, the storage medium may include, but is not limited to, various media that can store program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a portable hard disk, a magnetic disk, or an optical disc.
Optionally, in this embodiment, according to the program code stored in the storage medium, the processor performs extracting the target coding sequence of the key frame according to the expression of the key frame in the multi-scale transform domain, which includes: decomposing the key frame with the Stationary Wavelet Transform to obtain the first decomposed image expressed on scale S1 and the second decomposed image expressed on scale S2; encoding the first decomposed image to obtain the first coded sequence and encoding the second decomposed image to obtain the second coded sequence; and concatenating the first coded sequence and the second coded sequence to obtain the target coding sequence.
Optionally, in this embodiment, according to the program code stored in the storage medium, the processor downsamples the first decomposed image to obtain an image represented by a first matrix and downsamples the second decomposed image to obtain an image represented by a second matrix; the processor hash-encodes the first matrix to obtain the first coded sequence and hash-encodes the second matrix to obtain the second coded sequence.
Optionally, in this embodiment, according to the program code stored in the storage medium, the processor obtains the median of the first matrix; computes the difference between each element of the first matrix and the median; sets the value of each element of the first matrix whose difference is greater than 0 to 1 and the value of each element whose difference is less than 0 to 0; and connects adjacent elements in sequence to obtain the first coded sequence, where the first coded sequence is a combination of 0s and 1s.
Optionally, in this embodiment, according to the program code stored in the storage medium, the processor sends first indication information, where the first indication information is used to indicate that the target video is a copy of the first video.
Optionally, in this embodiment, according to the program code stored in the storage medium, after the target video and the first video are determined to be the same video, the processor extracts the video name of the first video in the reference video library and the playback path of the first video, and pushes the video name and the playback path.
Optionally, for concrete examples of this embodiment, reference may be made to the examples described in Embodiment 1 and Embodiment 2 above, which are not repeated here.
The sequence numbers of the above embodiments of the present invention are for description only and do not indicate the relative merits of the embodiments.
If the integrated units in the above embodiments are implemented in the form of software functional units and are sold or used as independent products, they may be stored in the above computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing one or more computer devices (which may be personal computers, servers, network devices, or the like) to perform all or some of the steps of the methods described in the embodiments of the present invention.
In the above embodiments of the present invention, the description of each embodiment has its own emphasis; for a part that is not described in detail in one embodiment, reference may be made to the related descriptions of other embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed client may be implemented in other ways. The apparatus embodiments described above are merely illustrative; for example, the division into units is only a division by logical function, and other divisions are possible in an actual implementation: multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. Furthermore, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, units, or modules, and may be electrical or take other forms.
Units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
The above are only preferred embodiments of the present invention. It should be noted that those of ordinary skill in the art may make several improvements and refinements without departing from the principles of the present invention, and these improvements and refinements shall also be regarded as falling within the protection scope of the present invention.
Claims (12)
1. the processing method of a video, it is characterised in that including:
Obtain the key frame of target video;
The target coding sequence of described key frame is extracted according to described key frame expressing information on multi-scale transform territory, its
In, described target coding sequence is for representing described target video image sequence based on space-time characterisation, by described target code
Sequence is as the video finger print of described target video;
Described video finger print is utilized to compare with the video finger print of video in reference video storehouse;
If it is consistent with the video finger print of the first video in described reference video storehouse that comparison goes out described video finger print, it is determined that goes out institute
Stating target video with described first video is identical video.
2. The method according to claim 1, characterized in that extracting the target coding sequence of the key frame according to the expression of the key frame in the multi-scale transform domain comprises:
decomposing the key frame with the Stationary Wavelet Transform to obtain a first decomposed image expressed on scale S1 and a second decomposed image expressed on scale S2;
encoding the first decomposed image to obtain a first coded sequence, and encoding the second decomposed image to obtain a second coded sequence; and
concatenating the first coded sequence and the second coded sequence to obtain the target coding sequence.
3. The method according to claim 2, characterized in that encoding the first decomposed image to obtain the first coded sequence and encoding the second decomposed image to obtain the second coded sequence comprises:
downsampling the first decomposed image to obtain an image represented by a first matrix, and downsampling the second decomposed image to obtain an image represented by a second matrix; and
hash-encoding the first matrix to obtain the first coded sequence, and hash-encoding the second matrix to obtain the second coded sequence.
4. The method according to claim 3, characterized in that hash-encoding the first matrix to obtain the first coded sequence comprises:
obtaining the median of the first matrix;
computing the difference between each element of the first matrix and the median;
setting the value of each element of the first matrix whose difference is greater than 0 to 1, and setting the value of each element of the first matrix whose difference is less than 0 to 0; and
connecting adjacent elements in sequence to obtain the first coded sequence, wherein the first coded sequence is a combination of 0s and 1s.
5. The method according to claim 1, characterized in that, after determining that the target video and the first video are the same video, the method further comprises:
sending first indication information, wherein the first indication information is used to indicate that the target video is a copy of the first video.
6. The method according to claim 1, characterized in that, after determining that the target video and the first video are the same video, the method further comprises:
extracting the video name of the first video in the reference video library and the playback path of the first video; and
pushing the video name and the playback path.
7. A video processing apparatus, characterized by comprising:
an acquiring unit, configured to obtain a key frame of a target video;
a first extraction unit, configured to extract a target coding sequence of the key frame according to the expression of the key frame in the multi-scale transform domain, wherein the target coding sequence is used to represent the image sequence of the target video based on spatio-temporal characteristics, and the target coding sequence serves as the video fingerprint of the target video;
a comparing unit, configured to compare the video fingerprint with the video fingerprints of videos in a reference video library; and
a determining unit, configured to determine, when the comparison shows that the video fingerprint is consistent with the video fingerprint of a first video in the reference video library, that the target video and the first video are the same video.
8. The apparatus according to claim 7, characterized in that the extraction unit comprises:
a decomposition subunit, configured to decompose the key frame with the Stationary Wavelet Transform to obtain a first decomposed image expressed on scale S1 and a second decomposed image expressed on scale S2;
a processing subunit, configured to encode the first decomposed image to obtain a first coded sequence, and to encode the second decomposed image to obtain a second coded sequence; and
a connection subunit, configured to concatenate the first coded sequence and the second coded sequence to obtain the target coding sequence.
9. The apparatus according to claim 8, characterized in that the processing subunit comprises:
a sampling module, configured to downsample the first decomposed image to obtain an image represented by a first matrix, and to downsample the second decomposed image to obtain an image represented by a second matrix; and
a coding module, configured to hash-encode the first matrix to obtain the first coded sequence, and to hash-encode the second matrix to obtain the second coded sequence.
10. The apparatus according to claim 9, characterized in that the coding module comprises:
an acquiring submodule, configured to obtain the median of the first matrix;
a judging submodule, configured to compute the difference between each element of the first matrix and the median;
a setting submodule, configured to set the value of each element of the first matrix whose difference is greater than 0 to 1, and to set the value of each element of the first matrix whose difference is less than 0 to 0; and
a connection submodule, configured to connect adjacent elements in sequence to obtain the first coded sequence, wherein the first coded sequence is a combination of 0s and 1s.
11. The apparatus according to claim 7, characterized in that the apparatus further comprises:
an issuing unit, configured to send, after the target video and the first video are determined to be the same video, first indication information, wherein the first indication information is used to indicate that the target video is a copy of the first video.
12. The apparatus according to claim 7, characterized in that the apparatus further comprises:
a second extraction unit, configured to extract, after the target video and the first video are determined to be the same video, the video name of the first video in the reference video library and the playback path of the first video; and
a push unit, configured to push the video name and the playback path.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610682592.9A CN106231356B (en) | 2016-08-17 | 2016-08-17 | The treating method and apparatus of video |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106231356A true CN106231356A (en) | 2016-12-14 |
CN106231356B CN106231356B (en) | 2019-01-08 |
Family
ID=57553339
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610682592.9A Active CN106231356B (en) | 2016-08-17 | 2016-08-17 | The treating method and apparatus of video |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106231356B (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108989856A (en) * | 2018-06-19 | 2018-12-11 | 康佳集团股份有限公司 | Processing method, terminal and medium based on short video acquisition positive associated data |
CN109255777A (en) * | 2018-07-27 | 2019-01-22 | 昆明理工大学 | A kind of image similarity calculation method of combination wavelet transformation and perceptual hash algorithm |
CN109788309A (en) * | 2018-12-25 | 2019-05-21 | 陕西优米数据技术有限公司 | Video file piracy detection method and system based on block chain technology |
CN109857907A (en) * | 2019-02-25 | 2019-06-07 | 百度在线网络技术(北京)有限公司 | Video locating method and device |
CN110222594A (en) * | 2019-05-20 | 2019-09-10 | 厦门能见易判信息科技有限公司 | Pirate video recognition methods and system |
CN112104870A (en) * | 2020-11-17 | 2020-12-18 | 南京世泽科技有限公司 | Method and system for improving security of ultra-low time delay encoder |
CN112203115A (en) * | 2020-10-10 | 2021-01-08 | 腾讯科技(深圳)有限公司 | Video identification method and related device |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101442641A (en) * | 2008-11-21 | 2009-05-27 | 清华大学 | Content-based video copy monitoring method and system
CN101855635A (en) * | 2007-10-05 | 2010-10-06 | 杜比实验室特许公司 | Media fingerprints that reliably correspond to media content |
CN103226571A (en) * | 2013-03-26 | 2013-07-31 | 天脉聚源(北京)传媒科技有限公司 | Method and device for detecting duplication in an advertisement library
CN103279473A (en) * | 2013-04-10 | 2013-09-04 | 深圳康佳通信科技有限公司 | Method, system and mobile terminal for searching massive amounts of video content |
US20130259361A1 (en) * | 2008-08-20 | 2013-10-03 | Sri International | Content-based matching of videos using local spatio-temporal fingerprints |
CN104142984A (en) * | 2014-07-18 | 2014-11-12 | 电子科技大学 | Video fingerprint retrieval method based on coarse and fine granularity |
CN104504121A (en) * | 2014-12-29 | 2015-04-08 | 北京奇艺世纪科技有限公司 | Video retrieval method and device |
- 2016-08-17: CN application CN201610682592.9A granted as patent CN106231356B (status: Active)
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108989856A (en) * | 2018-06-19 | 2018-12-11 | 康佳集团股份有限公司 | Processing method, terminal and medium for acquiring positively associated data based on short videos
CN109255777A (en) * | 2018-07-27 | 2019-01-22 | 昆明理工大学 | Image similarity calculation method combining wavelet transform and perceptual hash algorithm
CN109255777B (en) * | 2018-07-27 | 2021-10-22 | 昆明理工大学 | Image similarity calculation method combining wavelet transform and perceptual hash algorithm
CN109788309A (en) * | 2018-12-25 | 2019-05-21 | 陕西优米数据技术有限公司 | Blockchain-based video file piracy detection method and system
CN109857907A (en) * | 2019-02-25 | 2019-06-07 | 百度在线网络技术(北京)有限公司 | Video positioning method and device
CN109857907B (en) * | 2019-02-25 | 2021-11-30 | 百度在线网络技术(北京)有限公司 | Video positioning method and device
CN110222594A (en) * | 2019-05-20 | 2019-09-10 | 厦门能见易判信息科技有限公司 | Pirated video identification method and system
CN110222594B (en) * | 2019-05-20 | 2021-11-16 | 厦门能见易判信息科技有限公司 | Pirated video identification method and system
CN112203115A (en) * | 2020-10-10 | 2021-01-08 | 腾讯科技(深圳)有限公司 | Video identification method and related device
CN112203115B (en) * | 2020-10-10 | 2023-03-10 | 腾讯科技(深圳)有限公司 | Video identification method and related device
CN112104870A (en) * | 2020-11-17 | 2020-12-18 | 南京世泽科技有限公司 | Method and system for improving security of an ultra-low-latency encoder
CN112104870B (en) * | 2020-11-17 | 2021-04-06 | 南京世泽科技有限公司 | Method and system for improving security of an ultra-low-latency encoder
Also Published As
Publication number | Publication date |
---|---|
CN106231356B (en) Video processing method and apparatus | 2019-01-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106231356A (en) | Video processing method and apparatus | |
CN101038592B (en) | Method and apparatus for representing a group of images | |
Tsai et al. | Location coding for mobile image retrieval | |
Ulutas et al. | Frame duplication/mirroring detection method with binary features | |
Xie et al. | Bag-of-words feature representation for blind image quality assessment with local quantized pattern | |
Chen et al. | JSNet: a simulation network of JPEG lossy compression and restoration for robust image watermarking against JPEG attack | |
Kumar et al. | Near lossless image compression using parallel fractal texture identification | |
Akhtar et al. | Digital video tampering detection and localization: Review, representations, challenges and algorithm | |
Jalali et al. | A new steganography algorithm based on video sparse representation | |
CN114140708A (en) | Video processing method, device and computer readable storage medium | |
CN109874018A (en) | Neural-network-based image encoding method, system, terminal and storage medium | |
Teng et al. | Image indexing and retrieval based on vector quantization | |
Bhuyan et al. | Development of secrete images in image transferring system | |
Yang et al. | No‐reference image quality assessment via structural information fluctuation | |
Zhang et al. | A semi-feature learning approach for tampered region localization across multi-format images | |
He et al. | A novel two-dimensional reversible data hiding scheme based on high-efficiency histogram shifting for JPEG images | |
Ghodhbani et al. | Depth-based color stereo images retrieval using joint multivariate statistical models | |
Cemiloglu et al. | Blind video quality assessment via spatiotemporal statistical analysis of adaptive cube size 3D‐DCT coefficients | |
WO2023118317A1 (en) | Method and data processing system for lossy image or video encoding, transmission and decoding | |
Shankar et al. | Moderate embed cross validated and feature reduced Steganalysis using principal component analysis in spatial and transform domain with Support Vector Machine and Support Vector Machine-Particle Swarm Optimization | |
CN113128278A (en) | Image identification method and device | |
Michaylov | Exploring the Use of Steganography and Steganalysis in Forensic Investigations for Analysing Digital Evidence | |
Tyagi et al. | ForensicNet: Modern convolutional neural network‐based image forgery detection network | |
Taya et al. | Detecting tampered region in video using LSTM and U‐Net | |
Liu et al. | Blind Image Quality Assessment Based on Mutual Information |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||