CN106231356B - Video processing method and apparatus - Google Patents

Video processing method and apparatus

Info

Publication number
CN106231356B
CN106231356B (application CN201610682592.9A)
Authority
CN
China
Prior art keywords
video
target
image
matrix
fingerprint
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610682592.9A
Other languages
Chinese (zh)
Other versions
CN106231356A (en)
Inventor
布礼文
徐敘遠
简伟华
黄嘉文
袁方
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201610682592.9A priority Critical patent/CN106231356B/en
Publication of CN106231356A publication Critical patent/CN106231356A/en
Application granted
Publication of CN106231356B publication Critical patent/CN106231356B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • H04N21/23418Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44008Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream

Abstract

The invention discloses a video processing method and apparatus. The method comprises: obtaining a key frame of a target video; extracting a target coding sequence of the key frame from the key frame's representation in a multi-scale transform domain, wherein the target coding sequence represents the image sequence of the target video in terms of its spatio-temporal characteristics and serves as the video fingerprint of the target video; comparing this video fingerprint with the video fingerprints of the videos in a reference video library; and, if the video fingerprint is found to agree with the video fingerprint of a first video in the reference video library, determining that the target video and the first video are the same video. The invention solves the technical problem of inaccurate identification of videos by their video fingerprints.

Description

Video processing method and apparatus
Technical field
The present invention relates to the field of video processing, and in particular to a video processing method and apparatus.
Background technique
With the development of internet technology, thousands of videos are uploaded to the network, and a considerable proportion of them are illegal copies or revisions of existing media. Copying on this scale makes the management of video copyright on the internet a complicated undertaking, and has greatly accelerated the demand for fast and accurate copy-detection algorithms. The video copy detection task is to determine whether a given video is a duplicated version of existing video data. Video fingerprinting is the most widely used copy-detection method for protecting digital video against unauthorized use. A video fingerprint is essentially a content-based signature derived from the original video segment, so that the fingerprint can represent the video compactly and efficiently in many search and matching procedures. Traditional video fingerprinting algorithms can be classified, by the features they extract, into color-space, temporal, and spatial methods, and each class has its shortcomings. For the first class, fingerprints based on color space are mainly derived from the color histograms of temporal and/or spatial regions of the video, and the RGB image is usually converted into the YUV color space (Y denotes luminance, i.e. the grayscale value; U and V denote chrominance, describing the hue and saturation of a given pixel) or the LAB color space. However, a black-and-white video has no color, so the color-space methods cannot extract a video fingerprint from it, which leads to the technical problem of inaccurate identification of videos by their video fingerprints.
No effective solution to the above problem has yet been proposed.
Summary of the invention
Embodiments of the present invention provide a video processing method and apparatus, so as at least to solve the technical problem of inaccurate identification of videos by their video fingerprints.
According to one aspect of the embodiments of the present invention, a video processing method is provided, comprising: obtaining a key frame of a target video; extracting a target coding sequence of the key frame from the key frame's representation in a multi-scale transform domain, wherein the target coding sequence represents the image sequence of the target video in terms of its spatio-temporal characteristics and serves as the video fingerprint of the target video; comparing the video fingerprint with the video fingerprints of the videos in a reference video library; and, if the video fingerprint agrees with the video fingerprint of a first video in the reference video library, determining that the target video and the first video are the same video.
According to another aspect of the embodiments of the present invention, a video processing apparatus is also provided, comprising: an acquiring unit for obtaining a key frame of a target video; a first extraction unit for extracting a target coding sequence of the key frame from the key frame's representation in a multi-scale transform domain, wherein the target coding sequence represents the image sequence of the target video in terms of its spatio-temporal characteristics and serves as the video fingerprint of the target video; a comparing unit for comparing the video fingerprint with the video fingerprints of the videos in a reference video library; and a determination unit for determining, when the video fingerprint agrees with the video fingerprint of a first video in the reference video library, that the target video and the first video are the same video.
In the embodiments of the present invention, a key frame of the target video is obtained; a target coding sequence of the key frame is extracted from the key frame's representation in a multi-scale transform domain, represents the image sequence of the target video in terms of its spatio-temporal characteristics, and serves as the video fingerprint of the target video; the video fingerprint is compared with the video fingerprints of the videos in a reference video library; and if it agrees with the video fingerprint of a first video in the reference video library, the target video and the first video are determined to be the same video. Because the video fingerprint is a characteristic of the video in the transform domain, it is not affected by the colors of the pictures; a black-and-white video therefore also yields a video fingerprint for its key frames, so that the obtained fingerprint expresses the target video accurately. This solves the prior-art problem of inaccurate expression of the target video, achieves the effect of expressing the target video accurately, and broadens the range of video files from which a video fingerprint can be obtained, so that the above approach can be applied widely to all kinds of video.
Detailed description of the invention
The drawings described herein are provided for a further understanding of the present invention and constitute part of this application; the illustrative embodiments of the present invention and their descriptions serve to explain the present invention and do not constitute an improper limitation of it. In the drawings:
Fig. 1 is a schematic diagram of a network architecture according to an embodiment of the present invention;
Fig. 2 is a flowchart of a video processing method according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of image decomposition according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of test images at four scales according to an embodiment of the present invention;
Fig. 5 is a schematic diagram of decomposition and encoding according to an embodiment of the present invention;
Fig. 6 is a flowchart of video identification according to an embodiment of the present invention;
Fig. 7 is a schematic diagram of a video processing apparatus according to an embodiment of the present invention;
Fig. 8 is a schematic diagram of a server according to an embodiment of the present invention.
Specific embodiment
In order to enable those skilled in the art to better understand the solutions of the present invention, the technical solutions in the embodiments of the present invention are described below clearly and completely in conjunction with the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present invention rather than all of them. All other embodiments obtained by those of ordinary skill in the art on the basis of the embodiments of the present invention without creative work shall fall within the scope of protection of the present invention.
It should be noted that the terms "first", "second", and the like in the description, the claims, and the above drawings are used to distinguish similar objects and not to describe a particular order or sequence. It should be understood that data so designated are interchangeable where appropriate, so that the embodiments of the present invention described herein can be implemented in orders other than those illustrated or described herein. In addition, the terms "comprise" and "have" and any variants of them are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device containing a series of steps or units is not necessarily limited to the steps or units expressly listed, but may include other steps or units that are not expressly listed or that are inherent to such a process, method, product, or device.
Embodiment 1
According to the embodiments of the present invention, a method embodiment is provided that can be executed by the apparatus embodiment of this application. It should be noted that the steps shown in the flowcharts of the drawings may be executed in a computer system, such as a set of computer-executable instructions, and that, although a logical order is shown in the flowcharts, the steps shown or described may in some cases be executed in an order different from the one given here.
According to embodiments of the present invention, a kind of processing method of video is provided.
Optionally, in this embodiment, the above video processing method can be applied in a hardware environment constituted by a terminal 102 and a server 104 as shown in Fig. 1. As shown in Fig. 1, the terminal 102 is connected to the server 104 through a network, which includes, but is not limited to, a mobile communication network, a wide area network, a metropolitan area network, or a local area network; the terminal 102 may be a mobile phone, a PC, a notebook computer, or a tablet computer.
Fig. 2 is a flowchart of the video processing method according to an embodiment of the present invention. The method provided by this embodiment is described in detail below in conjunction with Fig. 2; as shown in Fig. 2, it mainly comprises the following steps:
Step S202: obtain a key frame of the target video. Before a video is identified, the key frames of the target video are first determined. They can be extracted with the temporally informative representative image algorithm (Temporal Informative Representative Image, TIRI); the specific extraction method is described in detail later.
Step S204: extract a target coding sequence of the key frame from the key frame's representation in a multi-scale transform domain, wherein the target coding sequence represents the image sequence of the target video in terms of its spatio-temporal characteristics and serves as the video fingerprint of the target video.
In this embodiment, the target coding sequence is obtained with the stationary wavelet transform (Stationary Wavelet Transform, SWT). In general, when an ordinary image is degraded, the linear relationships at the low scales are preserved, but those at the high scales, especially the fine scales, are disturbed. On this basis, the SWT offers multi-scale analysis and, unlike the discrete wavelet transform (Discrete Wavelet Transform, DWT), is shift-invariant; the shift invariance comes from a filter design without downsampling.
Fig. 3 shows the realization of the SWT. As shown in Fig. 3, the input image first undergoes a two-channel decomposition without downsampling, and the decomposed image is then recursively decomposed again into a low-pass image and a high-pass image. The decomposition is obtained relatively easily under the following condition:
H0(z)G0(z) + H1(z)G1(z) = 1, where H0(z) and H1(z) are the transfer functions of the low-pass filter and the high-pass filter respectively.
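As an illustrative aid (not part of the patent), the undecimated two-channel split and its recursion can be sketched in Python. The Haar-like averaging and differencing filters below are an assumed filter pair: with synthesis filters G0(z) = G1(z) = 1 they satisfy the reconstruction condition above, since H0(z) + H1(z) = 1.

```python
def swt_level(x, lag):
    """One undecimated two-channel split: an averaging (low-pass) and a
    differencing (high-pass) filter at the given lag, with no downsampling.
    Because H0(z) + H1(z) = 1, low[n] + high[n] == x[n] exactly."""
    low = [(x[n] + x[n - lag]) / 2 for n in range(len(x))]   # H0 = (1 + z^-lag)/2
    high = [(x[n] - x[n - lag]) / 2 for n in range(len(x))]  # H1 = (1 - z^-lag)/2
    return low, high   # Python negative indexing gives periodic extension

def swt(x, levels):
    """Recursively re-split the low-pass band. The 'a trous' doubling of the
    lag replaces downsampling, which is what makes the SWT shift-invariant."""
    bands = []
    low = list(x)
    for j in range(levels):
        low, high = swt_level(low, 2 ** j)
        bands.append(high)   # detail bands, finest first
    bands.append(low)        # coarsest approximation last
    return bands

signal = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]
bands = swt(signal, 3)
# Perfect reconstruction: summing all bands recovers the input sample-wise.
rebuilt = [sum(b[n] for b in bands) for n in range(len(signal))]
```

A three-level split of an 8-sample signal yields three detail bands plus one approximation band, each of full length, and their sample-wise sum reproduces the input exactly.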
Fig. 4 shows test images at the four SWT scales. Image S4 is the high-pass image from the first decomposition, and image S1 is the low-pass image from the last decomposition. Essentially, scales S1 through S4 respectively represent the low-scale information of the different frequency bands (low frequencies) and the finer detail information of the high scales (high frequencies).
The extraction of the target coding sequence in this embodiment is described below in conjunction with Fig. 5.
First, SWT decomposition is applied to the input image f(m, n), yielding a low-pass image, expressed by H0(z), and a high-pass image, expressed by H1(z). The input image f(m, n) is the key frame of this embodiment, expressed as an N × N matrix.
Then the low-pass image H0(z) obtained from the first decomposition is decomposed again, similarly yielding a high-pass image H1(z^2) and a low-pass image H0(z^2); the low-pass image H0(z^2) is decomposed once more, yielding a high-pass image H1(z^4) and a low-pass image H0(z^4).
After these three decompositions, we have the low-pass image H0(z^4) at scale S1, the high-pass image H1(z^4) at scale S2, the high-pass image H1(z^2) at scale S3, and the high-pass image H1(z) at scale S4.
The image at scale S2 (the second decomposition image) and the image at scale S1 (the first decomposition image) are then down-sampled.
That is, extracting the target coding sequence of the key frame comprises: decomposing the key frame with the stationary wavelet transform to obtain the first decomposition image at scale S1 and the second decomposition image at scale S2; encoding the first decomposition image to obtain a first coded sequence and encoding the second decomposition image to obtain a second coded sequence; and concatenating the first coded sequence and the second coded sequence to obtain the target coding sequence.
Here, encoding the first decomposition image to obtain the first coded sequence and encoding the second decomposition image to obtain the second coded sequence comprises: down-sampling the first decomposition image to obtain an image expressed by a first matrix and down-sampling the second decomposition image to obtain an image expressed by a second matrix; and hash-coding the first matrix to obtain the first coded sequence and hash-coding the second matrix to obtain the second coded sequence.
Specifically, throughout the decomposition the matrices expressing the first and second decomposition images remain N × N; after down-sampling, the matrices expressing the second decomposition image at scale S2 and the first decomposition image at scale S1 become smaller M × K matrices. Specifically, the image at scale S1 becomes an M1 × K1 matrix (the first matrix) and the image at scale S2 becomes an M2 × K2 matrix (the second matrix). In general, M1 and K1 may be equal or unequal, as may M2 and K2; this embodiment uses M1 = K1 = M2 = K2 = M.
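How an N × N matrix might be reduced to M × M can be sketched as follows (illustrative Python, not part of the patent; block-mean averaging is an assumption, since the text does not specify the down-sampling filter, and for simplicity the sketch assumes M divides N):

```python
def block_downsample(img, m):
    """Shrink an n-by-n matrix to m-by-m by averaging non-overlapping
    blocks (one plausible down-sampling choice; requires m to divide n)."""
    n = len(img)
    step = n // m
    return [[sum(img[r][c]
                 for r in range(i * step, (i + 1) * step)
                 for c in range(j * step, (j + 1) * step)) / step ** 2
             for j in range(m)]
            for i in range(m)]

# A 6x6 ramp reduced to 3x3: each output cell is the mean of a 2x2 block.
src = [[float(r * 6 + c) for c in range(6)] for r in range(6)]
small = block_downsample(src, 3)
```

The top-left output cell, for example, is the mean of the values 0, 1, 6, 7, i.e. 3.5.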
The down-sampled images are then hash-coded. That is, hash-coding the first matrix to obtain the first coded sequence comprises: obtaining the median of the first matrix; computing the difference between each element of the first matrix and the median; setting the elements of the first matrix whose difference is greater than 0 to 1 and the elements whose difference is less than 0 to 0; and connecting the adjacent elements in turn to obtain the first coded sequence, which is thus a combination of 0s and 1s.
Specifically, the hash coding of the image at scale S1 is identical to that of the image at scale S2; the image at scale S1 is taken as the example.
The median of the first matrix M1 × K1 is obtained and used as the threshold: each element of M1 × K1 is compared with this threshold, the values of the elements greater than the threshold are replaced with 1, and the values of the elements smaller than the threshold are replaced with 0. This comparison transforms M1 × K1 into a matrix composed of 0s and 1s. Since this embodiment uses M1 = K1 = M2 = K2 = M, the resulting matrix is M × M.
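The median-threshold binarisation described above can be sketched as follows (illustrative Python; mapping elements equal to the median to 0 is an assumption, since the text only specifies differences greater than or less than 0):

```python
def median_binarize(mat):
    """Binarise a matrix against its own median: elements above the median
    become 1, the rest 0. Thresholding at the median yields a roughly
    balanced mix of 0s and 1s, which keeps the hash discriminative."""
    flat = sorted(v for row in mat for v in row)
    k = len(flat)
    median = flat[k // 2] if k % 2 else (flat[k // 2 - 1] + flat[k // 2]) / 2
    return [[1 if v > median else 0 for v in row] for row in mat]

mat = [[9, 2, 7],
       [4, 6, 1],
       [8, 3, 5]]
bits = median_binarize(mat)   # the median is 5, so 9, 7, 6, 8 map to 1
```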
The same transformation is applied to the image at scale S2 and is not repeated here.
The M × M matrix is then traversed by a snake scan: as shown in Fig. 5, the elements of the matrix are scanned in the direction of the arrows of Fig. 5, turning the matrix into a one-dimensional sequence. A snake scan of the M × M matrix at scale S1 yields the first coded sequence, and a snake scan of the M × M matrix at scale S2 yields the second coded sequence. Joining the first coded sequence and the second coded sequence together, for example end to end, yields the target coding sequence, which can serve as the video fingerprint of the target video.
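The snake scan and the end-to-end joining can be sketched as follows (illustrative Python; a row-wise boustrophedon order is an assumption about the arrow direction shown in Fig. 5):

```python
def snake_scan(mat):
    """Serialise a matrix row by row, reversing every other row, so that
    consecutive entries of the 1-D sequence stay spatially adjacent."""
    out = []
    for i, row in enumerate(mat):
        out.extend(row if i % 2 == 0 else row[::-1])
    return out

s1_bits = [[1, 0], [0, 1]]
s2_bits = [[1, 1], [0, 0]]
# Fingerprint = S1 code followed end to end by the S2 code.
fingerprint = snake_scan(s1_bits) + snake_scan(s2_bits)
```

With M = 6, each scan yields 36 bits and the joined fingerprint 72 bits, matching the parameter choice described below.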
When the parameters are selected, the length of the hash code at scale S1 should be greater than 31 bits, since at such a length a relatively low false positive rate can be obtained. Accordingly, N = 128 for the matrix of the above input image (the key frame) and M = 6 for the first and second matrices expressing the down-sampled images at scales S1 and S2. The first and second coded sequences are therefore each 36 bits long, and the target coding sequence is 72 bits long.
Step S206: compare the video fingerprint with the video fingerprints of the videos in the reference video library.
Step S208: if the video fingerprint is found to agree with the video fingerprint of a first video in the reference video library, determine that the target video and the first video are the same video.
The video fingerprints of the videos in the reference video library are obtained with the same method. Because the video fingerprint is a characteristic of the video in the transform domain, it is not affected by the colors of the pictures; a black-and-white video therefore also yields a video fingerprint for its key frames, so that the obtained fingerprint expresses the target video accurately. This solves the prior-art problem of inaccurate expression of the target video, achieves the effect of expressing it accurately, and broadens the range of video files from which a fingerprint can be obtained, so that the above approach can be applied widely to all kinds of video.
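The comparison against the reference library can be sketched as follows (illustrative Python; the bit-flip tolerance max_dist is an assumed parameter, since the text describes only exact agreement of fingerprints, while a small Hamming-distance threshold is more robust against re-encoding noise):

```python
def hamming(a, b):
    """Number of positions at which two equal-length bit sequences differ."""
    return sum(x != y for x, y in zip(a, b))

def match(query, library, max_dist=5):
    """Return the names of reference videos whose fingerprint lies within
    max_dist bit flips of the query fingerprint."""
    return [name for name, fp in library.items()
            if hamming(query, fp) <= max_dist]

library = {
    "video_a": [1, 0, 1, 1, 0, 0, 1, 0],
    "video_b": [0, 1, 0, 0, 1, 1, 0, 1],
}
query = [1, 0, 1, 1, 0, 1, 1, 0]   # one bit off the fingerprint of video_a
hits = match(query, library, max_dist=2)
```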
Once the video fingerprint has been obtained, it can be used to judge whether the target video is a copied video; by comparison with the videos in the reference video library, the title and other information of the target video can also be determined. Video fingerprints can thus be applied not only to copyright detection and video identification but also to other scenarios such as video classification. The two scenarios of copyright detection and video identification are illustrated separately below; in both, the video fingerprints are obtained in the manner described above.
Scenario one:
After the target video and the first video are determined to be the same video, a first indication is issued, the first indication indicating that the target video is a copy of the first video.
Non-copied videos are stored in the reference video library; when the target video is found to have the same video fingerprint as some video in the reference video library, the target video is determined to be a copied video. The videos in the reference video library are non-copied videos that can be played through a video playback application but cannot be downloaded through it; a target video that shares a video fingerprint with a video in the reference library is therefore a copy.
On this basis, copyright protection of video files can be implemented, as described below in conjunction with Fig. 6.
S601: a large number of legal, copyright-protected videos are stored in the reference video library.
S602: each video in the reference video library has a video fingerprint, extracted by the same method as described above and not repeated here.
S603: the video fingerprints are hash-coded.
S604: the hash codes are stored in a database.
S605: a video clip of the target video is obtained; the clip may be the key frames described above.
S606: the video fingerprint of the video clip is extracted.
S607: the video fingerprint of the video clip is hash-coded.
S608: the database is searched for the fingerprint of a reference-library video that matches the fingerprint of the video clip.
S609: once a matching fingerprint is found, the video to which the clip belongs is determined to be a copied video without copyright.
In the system shown in Fig. 6, TIRI (Temporal Informative Representative Image) is used to obtain the video clip, i.e. to obtain the key frames of the target video; TIRI captures the temporal characteristics of the video and uses them to generate the video fingerprint.
Accordingly, during preprocessing the video image sequence is divided into multiple parts of S images each. For each part, the TIRI is obtained by computing a weighted average of these S images. Essentially, a TIRI is a blurred image containing whatever motion information is present in the video sequence. The TIRI is generated with the following formula:
l(x, y) = Σ_{i=1..S} w_i · p(x, y, i), where p(x, y, i) is the luminance value of the i-th of the S images at pixel (x, y) and w_i is a weighting factor, which may be constant, linear, or exponential. Experimental data show that exponential weights capture the motion information best; therefore, in the CBCD (Content-based Copy Detection) system shown in Fig. 6, the exponential weights w_i = β^i with β = 0.65 are used, and the video fingerprints are obtained with the acquisition method provided by this embodiment.
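The TIRI computation can be sketched as follows (illustrative Python; normalising by the sum of the weights is an assumption, since the patent's exact normalisation is not reproduced in this text):

```python
def tiri(frames, beta=0.65):
    """Temporally informative representative image: a per-pixel weighted
    average of S consecutive luminance frames with exponential weights
    w_i = beta**i, i = 1..S. Dividing by the weight sum is an assumed
    normalisation."""
    s = len(frames)
    weights = [beta ** i for i in range(1, s + 1)]  # weights[0] pairs with frames[0]
    total = sum(weights)
    h, w = len(frames[0]), len(frames[0][0])
    return [[sum(weights[i] * frames[i][y][x] for i in range(s)) / total
             for x in range(w)]
            for y in range(h)]

# Two 2x2 luminance 'frames': the blend lies between them, biased toward
# frame 1, which carries the larger weight (0.65 vs 0.65**2).
f1 = [[10.0, 20.0], [30.0, 40.0]]
f2 = [[20.0, 40.0], [60.0, 80.0]]
blend = tiri([f1, f2])
```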
In this embodiment, a robust CBCD system strikes a balance between precision (resolution) and recall (robustness). This embodiment uses F_λ as an overall metric of the performance of the CBCD system, where F_λ is defined as:
F_λ = (1 + λ²) · p · r / (λ² · p + r), where λ is the weight combining precision and recall, p denotes the precision, and r denotes the recall. In this embodiment, precision and recall are balanced by taking λ = 1.
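The overall metric can be sketched as follows (illustrative Python, assuming the standard weighted F-measure; the patent's own definition of F_λ is not reproduced in this text):

```python
def f_measure(p, r, lam=1.0):
    """Weighted F-measure of precision p and recall r. At lam == 1 it
    reduces to the harmonic mean F1 = 2*p*r / (p + r), which balances
    precision and recall equally (assumed standard form)."""
    return (1 + lam ** 2) * p * r / (lam ** 2 * p + r)

score = f_measure(0.98, 0.95)   # precision 0.98, recall 0.95, lam = 1
```

The balanced score always lies between the precision and the recall.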
Further, detecting and locating copied video segments are the two main functions of a CBCD system. The main purpose of detection is to detect any duplicated reference video segment, and the main purpose of localization is to locate the matching section of the duplicate. The detection error rate of the CBCD system is below 0.01%.
The accuracy with which copied video clips are located usually reflects the detection performance of the system, i.e. how many localization requests can be answered precisely; in this embodiment, the average accuracy of the CBCD system is around 98%.
Scenario two:
After the target video and the first video are determined to be the same video, the video title of the first video in the reference video library and the playback path of the first video are extracted, and the video title and the playback path are pushed.
Pushing the video title and playback path can be applied in instant-messaging clients, video clients, live-streaming clients, and the like: a recorded video segment (the target video) is uploaded to the server, and the server uses the uploaded target video to search the reference video library; if some video in the reference library is found to be the same video as the target video, the title and playback path of that first video are pushed.
An application client capable of playing video, such as a video playback client or a browser client, can then play the complete video corresponding to the target video according to the pushed playback path. In this way, a user who has seen only a fragment of a video, without knowing its title or playback path, can still play the complete video to which the fragment belongs.
It should be noted that, for simplicity of description, the foregoing method embodiments are expressed as series of action combinations; however, those skilled in the art should appreciate that the present invention is not limited by the order of the actions described, since according to the present invention some steps may be performed in other orders or simultaneously. Secondly, those skilled in the art should also appreciate that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily required by the present invention.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method of the above embodiments can be realized by means of software plus the necessary general-purpose hardware platform, and of course also by hardware, though in many cases the former is the better implementation. On this understanding, the technical solution of the present invention, or the part of it that contributes over the prior art, can be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and including instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to execute the methods described in the embodiments of the present invention.
Embodiment 2
According to the embodiments of the present invention, a video processing apparatus for implementing the above video processing method is also provided. The apparatus is mainly used to execute the video processing method provided by the above content of the embodiments of the present invention, and is introduced in detail below.
Fig. 7 is a schematic diagram of the video processing apparatus according to an embodiment of the present invention. As shown in Fig. 7, the apparatus mainly comprises:
An acquiring unit 10 for obtaining a key frame of the target video. Before a video is identified, the key frames of the target video are first determined; they can be extracted with the temporally informative representative image algorithm (Temporal Informative Representative Image, TIRI).
A first extraction unit 20 for extracting a target coding sequence of the key frame from the key frame's representation in a multi-scale transform domain, wherein the target coding sequence represents the image sequence of the target video in terms of its spatio-temporal characteristics and serves as the video fingerprint of the target video.
In this embodiment, the target coding sequence is obtained using the Stationary Wavelet Transform (SWT). In general, when an ordinary image is degraded, the linear relationships at low scales are preserved but are disturbed at high scales, especially high-precision high scales. On this basis, the SWT has the property of multi-scale analysis, and the SWT overcomes the shift variance of the Discrete Wavelet Transform (DWT). Specifically, shift invariance is achieved through a filter design without down-sampling.
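The shift-invariance point can be illustrated in one dimension. A minimal sketch, assuming Haar filters and circular convolution (neither choice is fixed by the patent): the undecimated low-pass output of a shifted signal is simply a shifted copy of the original output, while the decimated (DWT-style) output is not.

```python
import numpy as np

def lowpass(x):
    # Haar low-pass by circular convolution, no down-sampling (SWT-style)
    return 0.5 * (x + np.roll(x, 1))

x = np.array([1., 4., 2., 8., 5., 7., 3., 6.])
shifted = np.roll(x, 1)

swt_x, swt_s = lowpass(x), lowpass(shifted)
dwt_x, dwt_s = lowpass(x)[::2], lowpass(shifted)[::2]  # keep even samples

# SWT coefficients of the shifted signal are a shifted copy of the original...
assert np.allclose(swt_s, np.roll(swt_x, 1))
# ...while the decimated coefficients take different values after a shift.
assert not np.allclose(dwt_s, np.roll(dwt_x, 1))
```

This is why a fingerprint built on SWT coefficients is more robust to small spatial misalignments than one built on decimated DWT coefficients.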
Fig. 3 shows the implementation process of the SWT. As shown in Fig. 3, the input image is first decomposed into two channels without down-sampling, and the decomposed image is then decomposed recursively into a low-pass image and a high-pass image. The above decomposition can be obtained relatively easily under the following condition:
H0(z)G0(z) + H1(z)G1(z) = 1, where H0(z) and H1(z) are the transfer functions of the low-pass filter and the high-pass filter, respectively.
Fig. 4 shows test images at the 4 scales of the SWT. Image S4 is the high-pass image of the first decomposition, and image S1 is the low-pass image of the last decomposition. Essentially, scales S1 to S4 respectively represent the low-scale information of different frequency bands (low frequency) and the finer detail information at high scales (high frequency).
The extraction of the target coding sequence in this embodiment is described below with reference to Fig. 5.
First, SWT decomposition is performed on the input image f(m, n) to obtain a low-pass image and a high-pass image, where the low-pass image is expressed by H0(z) and the high-pass image by H1(z). The input image f(m, n) is the key frame of this embodiment, expressed as an N × N matrix.
Then, the low-pass image H0(z) obtained after the decomposition is decomposed again, similarly yielding a high-pass image H1(z²) and a low-pass image H0(z²). The low-pass image H0(z²) is decomposed once more, yielding a high-pass image H1(z⁴) and a low-pass image H0(z⁴).
After the above three decompositions are completed, the low-pass image H0(z⁴) at the S1 scale, the high-pass image H1(z⁴) at the S2 scale, the high-pass image H1(z²) at the S3 scale, and the high-pass image H1(z) at the S4 scale are obtained.
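The three-level cascade above can be sketched on one row of the key frame. The Haar analysis filters are an assumption (the patent does not fix the wavelet family); at level j the filters are dilated by inserting 2^j − 1 zeros between taps, which is what the notations H0(z²) and H0(z⁴) express.

```python
import numpy as np

def filt(x, h):
    # circular convolution of signal x with filter h, no down-sampling
    y = np.zeros(len(x))
    for k, hk in enumerate(h):
        y += hk * np.roll(x, k)
    return y

def dilate(h, j):
    # insert 2**j - 1 zeros between taps: h(z) -> h(z**(2**j))
    out = np.zeros((len(h) - 1) * 2**j + 1)
    out[::2**j] = h
    return out

h0, h1 = [0.5, 0.5], [0.5, -0.5]           # Haar low/high pass (assumed)
row = np.arange(8, dtype=float)            # one row of the key frame

low = row
bands = []                                 # high-pass bands: S4, S3, S2
for j in range(3):
    bands.append(filt(low, dilate(h1, j)))  # H1(z), H1(z^2), H1(z^4)
    low = filt(low, dilate(h0, j))          # H0(z), H0(z^2), H0(z^4)
s1 = low                                    # S1: low-pass after 3 levels
```

Because no samples are discarded, every band keeps the full length of the input, matching the N × N matrices described in the text.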
Among the above 4 scales, the image expressed at the S2 scale (i.e., the second decomposition image) and the image expressed at the S1 scale (i.e., the first decomposition image) are down-sampled.
Optionally, the extraction unit includes: a decomposition subunit, configured to decompose the key frame using the stationary wavelet transform to obtain a first decomposition image expressed at the S1 scale and a second decomposition image expressed at the S2 scale; a processing subunit, configured to encode the first decomposition image to obtain a first coded sequence and to encode the second decomposition image to obtain a second coded sequence; and a connection subunit, configured to connect the first coded sequence and the second coded sequence to obtain the target coding sequence.
Optionally, the processing subunit includes: a sampling module, configured to down-sample the first decomposition image to obtain an image represented by a first matrix and to down-sample the second decomposition image to obtain an image represented by a second matrix; and a coding module, configured to hash-encode the first matrix to obtain the first coded sequence and to hash-encode the second matrix to obtain the second coded sequence.
Specifically, during the decomposition the matrices representing the first and second decomposition images remain N × N. After down-sampling, the matrix representing the second decomposition image at the S2 scale and the matrix representing the first decomposition image at the S1 scale become smaller M × K matrices. Specifically, the image at the S1 scale becomes an M1 × K1 matrix (the first matrix) and the image at the S2 scale becomes an M2 × K2 matrix (the second matrix). In general, M1 and K1 may be equal or unequal, and M2 and K2 may be equal or unequal. In this embodiment, M1 = K1 = M2 = K2 = M.
Hash coding is performed on the down-sampled images. That is, the coding module includes: an acquisition submodule, configured to obtain the median of the first matrix; a judging submodule, configured to determine the difference between each element of the first matrix and the median; a setting submodule, configured to set the value of an element of the first matrix to 1 when its difference is greater than 0 and to 0 when its difference is less than 0; and a connection submodule, configured to connect adjacent elements in sequence to obtain the first coded sequence, where the first coded sequence is a combination of 0s and 1s.
Specifically, the hash coding of the image at the S1 scale is performed in the same manner as that of the image at the S2 scale; the image at the S1 scale is taken as an example.
The median of the first matrix M1 × K1 is obtained and used as a threshold, and each element of M1 × K1 is compared with this threshold: for elements whose value is greater than the threshold, the value is replaced with 1; for elements whose value is less than the threshold, the value is replaced with 0. Through this comparison, M1 × K1 is transformed into a matrix composed of 0s and 1s. Since M1 = K1 = M2 = K2 = M in this embodiment, the resulting matrix is an M × M matrix.
The same transformation is performed on the image at the S2 scale, and details are not repeated here.
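The median-thresholding step can be sketched as follows; mapping an element exactly equal to the median to 0 is one choice the patent text leaves open.

```python
import numpy as np

def median_hash(mat):
    """Binarize a matrix against its own median: elements above the
    median become 1, elements at or below it become 0 (ties-to-0 is
    an assumption; the patent only covers strictly greater/less)."""
    return (mat > np.median(mat)).astype(np.uint8)

m = np.array([[9., 2., 7.],
              [4., 6., 1.],
              [8., 3., 5.]])
bits = median_hash(m)   # median of 1..9 is 5
```

Using the matrix's own median as the threshold makes the resulting bits invariant to global brightness and contrast changes, which suits fingerprinting.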
The M × M matrix is then snake-scanned: as shown in Fig. 5, the elements of the matrix are scanned in the direction of the arrows of Fig. 5, turning the matrix into a one-dimensional sequence. Snake-scanning the M × M matrix at the S1 scale yields the first coded sequence, and snake-scanning the M × M matrix at the S2 scale yields the second coded sequence. The first coded sequence and the second coded sequence are connected to obtain the target coding sequence, which can serve as the video fingerprint of the target video. The connection may join the first coded sequence and the second coded sequence end to end.
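A sketch of the snake scan and the end-to-end connection, assuming the arrows of Fig. 5 trace the rows alternately left-to-right and right-to-left (the exact traversal order is only shown in the figure):

```python
import numpy as np

def snake_scan(mat):
    """Boustrophedon scan: even rows left-to-right, odd rows reversed,
    flattening the matrix into a one-dimensional bit sequence."""
    rows = [row if i % 2 == 0 else row[::-1]
            for i, row in enumerate(mat.tolist())]
    return [v for row in rows for v in row]

s1_bits = np.array([[1, 0], [1, 1]])      # toy binarized S1 matrix
s2_bits = np.array([[0, 1], [0, 0]])      # toy binarized S2 matrix

# connect the two scans end to end to form the target coding sequence
fingerprint = snake_scan(s1_bits) + snake_scan(s2_bits)
```

With the embodiment's parameters (M = 6 per scale) the same concatenation produces the 36 + 36 = 72-bit fingerprint described below.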
In selecting the parameters, the length of the hash code at the S1 scale should be greater than 31 bits, because at that length a relatively low false positive rate can be obtained. On this basis, N = 128 for the matrix of the input image (the key frame), and M = 6 for the first and second matrices representing the down-sampled images at the S1 and S2 scales. The length of each of the first and second coded sequences is therefore 36 bits, and the length of the target coding sequence is 72 bits.
A comparing unit 30, configured to compare the video fingerprint with the video fingerprints of the videos in a reference video library.
A determination unit 40, configured to determine that the target video and a first video in the reference video library are the same video when the video fingerprint is found to be consistent with the video fingerprint of the first video.
The video fingerprints of the videos in the reference video library are also obtained by the above method. Since the video fingerprint captures the characteristics of the video in the transform domain and is not affected by the colors of the pictures, the video fingerprint of the key frames can be obtained even for black-and-white video. The obtained video fingerprint therefore expresses the target video accurately, solving the prior-art problem of inaccurate expression of the target video and achieving the effect of accurately expressing it. Moreover, the range of video files from which a video fingerprint can be obtained is expanded, so the above approach is widely applicable to all kinds of video.
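The patent states only that the two fingerprints are compared for consistency. A Hamming-distance comparison with a small tolerance is a common realization of such binary-fingerprint matching and is sketched here as an assumption:

```python
def hamming(a, b):
    """Number of positions at which two bit sequences differ."""
    return sum(x != y for x, y in zip(a, b))

def same_video(fp_target, fp_ref, max_dist=0):
    """Consistency check: equal length and Hamming distance within a
    tolerance. max_dist = 0 means exact match; a small positive value
    would tolerate minor degradation (both choices are assumptions)."""
    return len(fp_target) == len(fp_ref) and hamming(fp_target, fp_ref) <= max_dist

fp1 = [1, 0, 1, 1, 0, 1, 0, 0]   # fingerprint of the target video
fp2 = [1, 0, 1, 1, 0, 1, 0, 0]   # reference entry: identical
fp3 = [0, 0, 1, 1, 0, 1, 0, 1]   # reference entry: two bits differ
```

In a real library, each of the reference videos would store its precomputed 72-bit sequence and the target fingerprint would be checked against all of them.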
Optionally, the apparatus further includes: an issuing unit, configured to issue first indication information after the target video and the first video are determined to be the same video, where the first indication information indicates that the target video is a copy of the first video. Non-copied videos are stored in the reference video library; when the target video is found to have the same video fingerprint as some video in the reference video library, the target video is determined to be a copied video. Since the videos in the reference video library are non-copied videos that can be played through a video playback application but cannot be downloaded through it, a target video that has the same video fingerprint as a video in the reference video library is a copied video.
Optionally, the apparatus further includes: a second extraction unit, configured to extract the video name of the first video in the reference video library and the playback path of the first video after the target video and the first video are determined to be the same video; and a push unit, configured to push the video name and the playback path.
The pushing of the video name and the playback path can be applied in instant-messaging clients, video clients, live-streaming clients, and the like. A captured video clip (the target video) is uploaded to the server, and the server uses the uploaded target video to search the reference video library; if some video in the reference library is found to be the same video as the target video, the title and playback path of that first video in the reference library can be pushed.
A client capable of playing video, such as a video playback client or a browser client, can then play the complete video corresponding to the target video according to the pushed playback path. In this way, even when a user has seen only a fragment of a video and does not know its title or playback path, the complete video corresponding to that fragment can be played.
Embodiment 3
According to an embodiment of the present invention, a server for implementing the above video processing method is further provided. As shown in Fig. 8, the server mainly includes a processor 801, a data interface 803, a memory 805, and a network interface 807, where:
The data interface 803 mainly transfers the video clip (the target video) obtained by a third-party tool to the processor 801 by way of data transmission.
The memory 805 is mainly used to store the videos in the reference video library and the target video.
The network interface 807 is mainly used for network communication with a terminal and receives the reference videos collected by the terminal.
The processor 801 is mainly configured to perform the following operations: acquiring the key frame of the target video; extracting the target coding sequence of the key frame according to the expression information of the key frame in the multi-scale transform domain, where the target coding sequence is used to represent an image sequence of the target video based on its spatio-temporal characteristics and serves as the video fingerprint of the target video; comparing the video fingerprint with the video fingerprints of the videos in the reference video library; and, if the video fingerprint is found to be consistent with the video fingerprint of a first video in the reference video library, determining that the target video and the first video are the same video.
The processor 801 is further configured so that extracting the target coding sequence of the key frame according to the expression information of the key frame in the multi-scale transform domain includes: decomposing the key frame using the stationary wavelet transform to obtain a first decomposition image expressed at the S1 scale and a second decomposition image expressed at the S2 scale; encoding the first decomposition image to obtain a first coded sequence, and encoding the second decomposition image to obtain a second coded sequence; and connecting the first coded sequence and the second coded sequence to obtain the target coding sequence.
The processor 801 is further configured so that encoding the first decomposition image to obtain the first coded sequence and encoding the second decomposition image to obtain the second coded sequence includes: down-sampling the first decomposition image to obtain an image represented by a first matrix, and down-sampling the second decomposition image to obtain an image represented by a second matrix; and hash-encoding the first matrix to obtain the first coded sequence, and hash-encoding the second matrix to obtain the second coded sequence.
The processor 801 is further configured so that hash-encoding the first matrix to obtain the first coded sequence includes: obtaining the median of the first matrix; determining the difference between each element of the first matrix and the median; setting the value of an element of the first matrix to 1 when its difference is greater than 0, and setting the value of an element of the first matrix to 0 when its difference is less than 0; and connecting adjacent elements in sequence to obtain the first coded sequence, where the first coded sequence is a combination of 0s and 1s.
The processor 801 is further configured so that, after the target video and the first video are determined to be the same video, the method further includes: issuing first indication information, where the first indication information indicates that the target video is a copy of the first video.
The processor 801 is further configured so that, after the target video and the first video are determined to be the same video, the method further includes: extracting the video name of the first video in the reference video library and the playback path of the first video; and pushing the video name and the playback path.
Optionally, for specific examples of this embodiment, reference may be made to the examples described in Embodiment 1 and Embodiment 2 above; details are not repeated here.
Embodiment 4
An embodiment of the present invention further provides a storage medium. Optionally, in this embodiment, the storage medium may be used to store program code for the video processing method of the embodiments of the present invention.
Optionally, in this embodiment, the storage medium may be located on at least one of multiple network devices in a network such as a mobile communication network, a wide area network, a metropolitan area network, or a local area network.
Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps:
S1: acquire the key frame of the target video;
S2: extract the target coding sequence of the key frame according to the expression information of the key frame in the multi-scale transform domain, the target coding sequence serving as the video fingerprint of the target video;
S3: compare the video fingerprint with the video fingerprints of the videos in the reference video library;
S4: if the video fingerprint is found to be consistent with the video fingerprint of a first video in the reference video library, determine that the target video and the first video are the same video.
Optionally, in this embodiment, the storage medium may include but is not limited to: a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, an optical disc, or other media that can store program code.
Optionally, in this embodiment, the processor executes, according to the program code stored in the storage medium, the step of extracting the target coding sequence of the key frame according to the expression information of the key frame in the multi-scale transform domain, which includes: decomposing the key frame using the stationary wavelet transform to obtain a first decomposition image expressed at the S1 scale and a second decomposition image expressed at the S2 scale; encoding the first decomposition image to obtain a first coded sequence, and encoding the second decomposition image to obtain a second coded sequence; and connecting the first coded sequence and the second coded sequence to obtain the target coding sequence.
Optionally, in this embodiment, the processor executes, according to the program code stored in the storage medium: down-sampling the first decomposition image to obtain an image represented by a first matrix, and down-sampling the second decomposition image to obtain an image represented by a second matrix; and hash-encoding the first matrix to obtain the first coded sequence, and hash-encoding the second matrix to obtain the second coded sequence.
Optionally, in this embodiment, the processor executes, according to the program code stored in the storage medium: obtaining the median of the first matrix; determining the difference between each element of the first matrix and the median; setting the value of an element of the first matrix to 1 when its difference is greater than 0, and setting the value of an element of the first matrix to 0 when its difference is less than 0; and connecting adjacent elements in sequence to obtain the first coded sequence, where the first coded sequence is a combination of 0s and 1s.
Optionally, in this embodiment, the processor executes, according to the program code stored in the storage medium, the step of issuing first indication information, where the first indication information indicates that the target video is a copy of the first video.
Optionally, in this embodiment, the processor executes, according to the program code stored in the storage medium, after the target video and the first video are determined to be the same video: extracting the video name of the first video in the reference video library and the playback path of the first video; and pushing the video name and the playback path.
Optionally, for specific examples of this embodiment, reference may be made to the examples described in Embodiment 1 and Embodiment 2 above; details are not repeated here.
The serial numbers of the above embodiments of the present invention are for description only and do not represent the relative merits of the embodiments.
If the integrated units in the above embodiments are implemented in the form of software functional units and sold or used as independent products, they may be stored in the above computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part that contributes over the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes instructions for causing one or more computer devices (which may be personal computers, servers, network devices, or the like) to execute all or some of the steps of the methods described in the embodiments of the present invention.
In the above embodiments of the present invention, each embodiment is described with its own emphasis; for parts not detailed in one embodiment, reference may be made to the related descriptions of other embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed client may be implemented in other ways. The apparatus embodiments described above are merely exemplary; for example, the division into units is only a division by logical function, and other division manners are possible in actual implementation. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, units, or modules, and may be electrical or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purposes of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
The above are only preferred embodiments of the present invention. It should be noted that, for those of ordinary skill in the art, various improvements and modifications may be made without departing from the principle of the present invention, and these improvements and modifications should also be regarded as falling within the protection scope of the present invention.

Claims (10)

1. A video processing method, characterized by comprising:
acquiring a key frame of a target video;
extracting a target coding sequence of the key frame according to expression information of the key frame in a multi-scale transform domain, wherein the target coding sequence is used to represent an image sequence of the target video based on its spatio-temporal characteristics, and the target coding sequence serves as the video fingerprint of the target video;
comparing the video fingerprint with the video fingerprints of videos in a reference video library; and
if the video fingerprint is found to be consistent with the video fingerprint of a first video in the reference video library, determining that the target video and the first video are the same video;
wherein extracting the target coding sequence of the key frame according to the expression information of the key frame in the multi-scale transform domain comprises: decomposing the key frame using a stationary wavelet transform to obtain a first decomposition image expressed at an S1 scale and a second decomposition image expressed at an S2 scale; encoding the first decomposition image to obtain a first coded sequence, and encoding the second decomposition image to obtain a second coded sequence; and connecting the first coded sequence and the second coded sequence to obtain the target coding sequence.
2. The method according to claim 1, wherein encoding the first decomposition image to obtain the first coded sequence and encoding the second decomposition image to obtain the second coded sequence comprises:
down-sampling the first decomposition image to obtain an image represented by a first matrix, and down-sampling the second decomposition image to obtain an image represented by a second matrix; and
hash-encoding the first matrix to obtain the first coded sequence, and hash-encoding the second matrix to obtain the second coded sequence.
3. The method according to claim 2, wherein hash-encoding the first matrix to obtain the first coded sequence comprises:
obtaining the median of the first matrix;
determining the difference between each element of the first matrix and the median;
setting the value of an element of the first matrix to 1 when its difference is greater than 0, and setting the value of an element of the first matrix to 0 when its difference is less than 0; and
connecting adjacent elements in sequence to obtain the first coded sequence, wherein the first coded sequence is a combination of 0s and 1s.
4. The method according to claim 1, wherein after the target video and the first video are determined to be the same video, the method further comprises:
issuing first indication information, wherein the first indication information indicates that the target video is a copy of the first video.
5. The method according to claim 1, wherein after the target video and the first video are determined to be the same video, the method further comprises:
extracting the video name of the first video in the reference video library and the playback path of the first video; and
pushing the video name and the playback path.
6. A video processing apparatus, characterized by comprising:
an acquiring unit, configured to acquire a key frame of a target video;
a first extraction unit, configured to extract a target coding sequence of the key frame according to expression information of the key frame in a multi-scale transform domain, wherein the target coding sequence is used to represent an image sequence of the target video based on its spatio-temporal characteristics, and the target coding sequence serves as the video fingerprint of the target video;
a comparing unit, configured to compare the video fingerprint with the video fingerprints of videos in a reference video library; and
a determination unit, configured to determine that the target video and a first video in the reference video library are the same video when the video fingerprint is found to be consistent with the video fingerprint of the first video;
wherein the extraction unit comprises: a decomposition subunit, configured to decompose the key frame using a stationary wavelet transform to obtain a first decomposition image expressed at an S1 scale and a second decomposition image expressed at an S2 scale; a processing subunit, configured to encode the first decomposition image to obtain a first coded sequence and to encode the second decomposition image to obtain a second coded sequence; and a connection subunit, configured to connect the first coded sequence and the second coded sequence to obtain the target coding sequence.
7. The apparatus according to claim 6, wherein the processing subunit comprises:
a sampling module, configured to down-sample the first decomposition image to obtain an image represented by a first matrix, and to down-sample the second decomposition image to obtain an image represented by a second matrix; and
a coding module, configured to hash-encode the first matrix to obtain the first coded sequence, and to hash-encode the second matrix to obtain the second coded sequence.
8. The apparatus according to claim 7, wherein the coding module comprises:
an acquisition submodule, configured to obtain the median of the first matrix;
a judging submodule, configured to determine the difference between each element of the first matrix and the median;
a setting submodule, configured to set the value of an element of the first matrix to 1 when its difference is greater than 0, and to set the value of an element of the first matrix to 0 when its difference is less than 0; and
a connection submodule, configured to connect adjacent elements in sequence to obtain the first coded sequence, wherein the first coded sequence is a combination of 0s and 1s.
9. The apparatus according to claim 6, further comprising:
an issuing unit, configured to issue first indication information after the target video and the first video are determined to be the same video, wherein the first indication information indicates that the target video is a copy of the first video.
10. The apparatus according to claim 6, further comprising:
a second extraction unit, configured to extract the video name of the first video in the reference video library and the playback path of the first video after the target video and the first video are determined to be the same video; and
a push unit, configured to push the video name and the playback path.
CN201610682592.9A 2016-08-17 2016-08-17 The treating method and apparatus of video Active CN106231356B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610682592.9A CN106231356B (en) 2016-08-17 2016-08-17 The treating method and apparatus of video

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610682592.9A CN106231356B (en) 2016-08-17 2016-08-17 The treating method and apparatus of video

Publications (2)

Publication Number Publication Date
CN106231356A CN106231356A (en) 2016-12-14
CN106231356B true CN106231356B (en) 2019-01-08

Family

ID=57553339

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610682592.9A Active CN106231356B (en) 2016-08-17 2016-08-17 The treating method and apparatus of video

Country Status (1)

Country Link
CN (1) CN106231356B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108989856B (en) * 2018-06-19 2021-05-18 康佳集团股份有限公司 Processing method, terminal and medium for acquiring positive film associated data based on short video
CN109255777B (en) * 2018-07-27 2021-10-22 昆明理工大学 Image similarity calculation method combining wavelet transformation and perceptual hash algorithm
CN109788309B (en) * 2018-12-25 2021-05-04 陕西优米数据技术有限公司 Video file piracy detection method and system based on block chain technology
CN109857907B (en) * 2019-02-25 2021-11-30 百度在线网络技术(北京)有限公司 Video positioning method and device
CN110222594B (en) * 2019-05-20 2021-11-16 厦门能见易判信息科技有限公司 Pirated video identification method and system
CN112203115B (en) * 2020-10-10 2023-03-10 腾讯科技(深圳)有限公司 Video identification method and related device
CN112104870B (en) * 2020-11-17 2021-04-06 南京世泽科技有限公司 Method and system for improving security of ultra-low time delay encoder

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101442641A (en) * 2008-11-21 2009-05-27 清华大学 Method and system for monitoring video copy based on content
CN101855635A (en) * 2007-10-05 2010-10-06 杜比实验室特许公司 Media fingerprints that reliably correspond to media content
CN103226571A (en) * 2013-03-26 2013-07-31 天脉聚源(北京)传媒科技有限公司 Method and device for detecting repeatability of advertisement library
CN103279473A (en) * 2013-04-10 2013-09-04 深圳康佳通信科技有限公司 Method, system and mobile terminal for searching massive amounts of video content
CN104142984A (en) * 2014-07-18 2014-11-12 电子科技大学 Video fingerprint retrieval method based on coarse and fine granularity
CN104504121A (en) * 2014-12-29 2015-04-08 北京奇艺世纪科技有限公司 Video retrieval method and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8498487B2 (en) * 2008-08-20 2013-07-30 Sri International Content-based matching of videos using local spatio-temporal fingerprints


Also Published As

Publication number Publication date
CN106231356A (en) 2016-12-14

Similar Documents

Publication Publication Date Title
CN106231356B (en) Video processing method and apparatus
Zampoglou et al. Large-scale evaluation of splicing localization algorithms for web images
EP2638701B1 (en) Vector transformation for indexing, similarity search and classification
Mullan et al. Forensic source identification using JPEG image headers: The case of smartphones
Kwon et al. Learning jpeg compression artifacts for image manipulation detection and localization
Tang et al. Lexicographical framework for image hashing with implementation based on DCT and NMF
EP1494132B1 (en) Method and apparatus for representing a group of images
US20080159403A1 (en) System for Use of Complexity of Audio, Image and Video as Perceived by a Human Observer
CN110532413B (en) Information retrieval method and device based on picture matching and computer equipment
Singh et al. Detection of frame duplication type of forgery in digital video using sub-block based features
Xie et al. Bag-of-words feature representation for blind image quality assessment with local quantized pattern
CN115443490A (en) Image auditing method and device, equipment and storage medium
CN110505513A (en) Video screenshot method and apparatus, electronic device, and storage medium
JP5634075B2 (en) Method and apparatus for processing a sequence of images, apparatus for processing image data, and computer program product
Peng et al. Detection of double JPEG compression with the same quantization matrix based on convolutional neural networks
GB2454213A (en) Analyzing a Plurality of Stored Images to Allow Searching
Mussarat et al. Content based image retrieval using combined features of shape, color and relevance feedback
Ghodhbani et al. Depth-based color stereo images retrieval using joint multivariate statistical models
EP2465056B1 (en) Method, system and controller for searching a database
Cirakman et al. Content-based copy detection by a subspace learning based video fingerprinting scheme
Liu et al. Perceptual image hashing based on Canny operator and tensor for copy-move forgery detection
CN111143619B (en) Video fingerprint generation method, search method, electronic device and medium
Asaad et al. Topological image texture analysis for quality assessment
Ruikar et al. Copy move image forgery detection using SIFT
Das et al. Image splicing detection using feature based machine learning methods and deep learning mechanisms

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant