CN101872415A - Video copy detection method suitable for IPTV - Google Patents

Video copy detection method suitable for IPTV

Info

Publication number
CN101872415A
Authority
CN
China
Prior art keywords
picture
video
shot
frame
similarity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN 201010166974
Other languages
Chinese (zh)
Inventor
金城
潘岩
路红
薛向阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fudan University
Original Assignee
Fudan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fudan University filed Critical Fudan University
Priority to CN 201010166974 priority Critical patent/CN101872415A/en
Publication of CN101872415A publication Critical patent/CN101872415A/en
Pending legal-status Critical Current

Landscapes

  • Studio Devices (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the technical field of video and image processing and in particular relates to a video copy detection method suitable for IPTV, comprising the following steps: detecting "picture-in-picture" regions in a video; segmenting the video into shots and clustering them; computing video frame similarity; and building a shot index with a K-means tree. By using a concise and effective frame similarity measure and a K-means tree shot index, the method achieves fast and efficient video copy detection; the shot-based detection is not only fast but also has good detection accuracy and coverage.

Description

Video copy detection method suitable for IPTV
Technical field
The invention belongs to the technical field of video and image processing and specifically relates to a video copy detection method.
Background technology
With the rapid development of the Internet and wireless networks, massive amounts of digital video have entered our daily lives, bringing with them increasingly serious problems of data redundancy and digital copyright. In recent years, content-based copy detection (CBCD, Content-based Copy Detection) technology has emerged as an alternative to digital watermarking for identifying copied video and images.
Existing copy detection techniques are varied, and each has its own advantages and shortcomings when handling different types of copies. In modern digital TV applications such as IPTV (Internet Protocol Television) systems, the video copy problem has its own distinctive characteristics.
Although any type of video copy may appear in TV applications, statistics show that the copy types that actually occur most frequently are: size scaling, luminance transformation, sharpness changes caused by transcoding, and manually added graphics and text. Size scaling is usually performed by shrinking the original picture and embedding it in a new picture, producing the so-called "picture-in-picture" copy. In addition, IPTV services generate large amounts of new data every day, so the detection technique must be highly efficient. Among existing approaches, copy detection based on global features is computationally efficient, handles sharpness changes and added graphics well, and can detect luminance transformations after suitable preprocessing, but it is usually powerless against the picture-in-picture problem. Detection based on local features can handle picture-in-picture and adapts well to luminance transformations and added graphics, but its computational complexity is high and its results are sensitive to changes in video sharpness, making it unstable and ill-suited to the requirements of IPTV. We therefore propose a fast and effective detection method and workflow tailored to the characteristics of IPTV applications.
Summary of the invention
The objective of the invention is to propose a fast and efficient video copy detection method and to build a complete, usable video copy detection system.
The video copy detection method provided by the invention comprises: picture-in-picture detection; shot segmentation and clustering; video frame similarity computation; and shot index construction using a K-means tree.
The detection process comprises the following main steps:
a) For each video in the database, first perform picture-in-picture detection, then shot segmentation and clustering, and build a K-means tree index from the clustering result;
b) When a new video needs to be checked, first perform picture-in-picture detection on it, then shot segmentation and clustering, and search the database index built from the clustering result in step a);
c) Output the search result.
The picture-in-picture detection method obtains the boundary and position of the picture-in-picture region by looking for image edges that appear continuously and stably and exhibit picture-in-picture characteristics. The concrete steps comprise:
a) Perform edge detection on each frame of the video under examination;
b) Apply a sliding window to the video: within a window of L frames, edges whose frequency of occurrence exceeds L*r (r is a threshold factor) are marked as candidate picture-in-picture edges;
c) Examine the shape features of each candidate edge and the amplitude of content variation inside it; if the content inside a candidate edge varies too little, the edge can be regarded as an ordinary frame-like element of the original video and excluded as a possible picture-in-picture region.
The shot segmentation and clustering method comprises the following steps:
a) Compute the similarity between consecutive frames using the specific frame similarity measure described below;
b) Filter the detected shots, discarding spurious shots that contain too few frames, so as to avoid the influence of transition effects between shots;
c) To improve detection efficiency, frame skipping can be adopted, i.e. the frame similarity is evaluated only every few frames;
d) For each extracted shot, compute one or more representative frames, and use the similarity between representative frames to compute the similarity between shots.
The specific frame similarity measure is computed as follows:
a) After converting a video frame to grayscale and applying histogram equalization, divide it into N*N equally sized sub-images, compute a 16-bin gray-level histogram for each sub-image, and normalize the result; each video frame can then be represented by N*N normalized 16-dimensional vectors.
Each frame is expressed as:
I_i = \{ v_i^{(1)}, v_i^{(2)}, v_i^{(3)}, \ldots, v_i^{(N\times N)} \}    (1)
Each sub-image is expressed as:
v^{(k)} = \langle v_1^{(k)}, v_2^{(k)}, v_3^{(k)}, \ldots, v_{16}^{(k)} \rangle    (2)
The similarity between two frames is computed as:
s_{i,j}^{k} = \frac{v_i^{(k)} \cdot v_j^{(k)}}{\| v_i^{(k)} \| \, \| v_j^{(k)} \|}    (3)
s_{i,j} = \sum_{k=1}^{N\times N} \left( s_{i,j}^{k} \right)^2 / N^2    (4)
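As an illustration of formulas (1)-(4), the following Python sketch (assuming OpenCV and NumPy, a BGR input frame, and the 4*4 partition used later in the embodiment; the function names frame_signature and frame_similarity are illustrative, not part of the patent) computes the block-histogram signature of a frame and the similarity between two frames.

```python
import cv2
import numpy as np

def frame_signature(frame_bgr, n=4):
    """Formulas (1)-(2): convert the frame to grayscale, equalize its histogram,
    split it into n*n blocks and return one L2-normalized 16-bin histogram per block."""
    gray = cv2.equalizeHist(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY))
    h, w = gray.shape
    vecs = []
    for r in range(n):
        for c in range(n):
            block = gray[r * h // n:(r + 1) * h // n,
                         c * w // n:(c + 1) * w // n]
            hist, _ = np.histogram(block, bins=16, range=(0, 256))
            hist = hist.astype(np.float64)
            norm = np.linalg.norm(hist)
            vecs.append(hist / norm if norm > 0 else hist)
    return np.array(vecs)                          # shape (n*n, 16), formula (1)

def frame_similarity(sig_i, sig_j):
    """Formulas (3)-(4): cosine similarity per block (the block vectors are
    already unit length), then the mean of the squared block similarities."""
    per_block = np.sum(sig_i * sig_j, axis=1)      # formula (3)
    return float(np.sum(per_block ** 2) / len(per_block))   # formula (4)
```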
The representative frame of a shot is extracted as follows:
The representative frame is obtained as a weighted mean of all frames in the shot:
a) Each shot can be expressed as:
S_i = \{ I_1, I_2, I_3, \ldots, I_K \}    (5)
The representative frame is computed as:
I_i^{o} = \sum_{n=1}^{K} \omega_n I_n \Big/ \sum_{n=1}^{K} \omega_n    (6)
b) The weights ω_n are obtained from a normal distribution.
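A minimal sketch of formula (6), assuming NumPy and that the shot is given as a sequence of equally sized frame arrays. Since the exact normal distribution (formula (7) of the original filing) is not reproduced in this text, the sketch assumes weights centered on the middle frame of the shot with a width proportional to the shot length.

```python
import numpy as np

def representative_frame(frames, sigma_ratio=0.25):
    """Formula (6): weighted mean of all frames in a shot.  The weights follow
    a normal distribution; its center (the middle of the shot) and its width
    (sigma_ratio * shot length) are assumptions, since formula (7) is not
    reproduced in this text."""
    frames = np.asarray(frames, dtype=np.float64)      # shape (K, H, W[, C])
    k = len(frames)
    n = np.arange(k)
    mu = (k - 1) / 2.0
    sigma = max(k * sigma_ratio, 1e-6)
    w = np.exp(-0.5 * ((n - mu) / sigma) ** 2)         # normal-distribution weights
    w /= w.sum()                                       # divide by the sum of weights
    return np.tensordot(w, frames, axes=(0, 0))        # weighted mean frame
```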
The shot index is constructed as follows:
a) Use the similarity between shot representative frames as the similarity between shots, and cluster the shots by this similarity, yielding a number of shot clusters;
b) Organize all shot clusters with a K-means tree; building the K-means index tree requires the similarity between shot clusters, which is computed as follows:
Each shot cluster can be expressed as:
SC_i = \{ S_i^{(1)}, S_i^{(2)}, S_i^{(3)}, \ldots, S_i^{(M)} \}    (8)
The similarity between shot clusters is:
SCS_{i,j} = \max_{0 < m < N,\ 0 < n < N} \{ SS_{i,j}(m, n) \}    (9)
The frame similarity computation uses the idea of block histograms: a single video frame is converted into a set of 16-dimensional vectors, and the similarity between frames is derived from the distance between these vector sets. Computing frame similarity with block histograms is robust to size scaling and to sharpness changes caused by transcoding; because manually added graphics and text usually do not cover most of the original picture, the block histogram method can handle this case as well; and converting frames to grayscale and applying histogram equalization effectively reduces the influence of luminance changes. The block histogram method is therefore a frame similarity measure well suited to the IPTV system.
The picture-in-picture detection method targets the characteristics of picture-in-picture copies in IPTV: by detecting and processing image edges, it obtains the boundary and position of the embedded picture. The basic idea is to look for image edges that appear continuously and stably and exhibit picture-in-picture characteristics. In actual TV content, a picture-in-picture region usually persists for a certain time, is stable in position, and is mostly rectangular in outline. To detect these features, edge detection is first performed on the target video frames, and the video is then processed with a sliding window: within a window of L frames, edges whose frequency of occurrence exceeds L*r (r is a threshold factor) are marked as candidate picture-in-picture edges; the shape features of each candidate edge and the amplitude of content variation inside it are then examined, and if the content inside a candidate edge varies too little, the edge is regarded as an ordinary frame-like element of the original video and excluded as a possible picture-in-picture region.
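A simplified sketch of the windowed edge statistics just described, assuming OpenCV (Canny edge detection, contour-based rectangle extraction) and illustrative values for L, r and the minimum box area; the subsequent check on the content variation inside each candidate box would reuse the frame similarity measure and is omitted here.

```python
import cv2
import numpy as np

def candidate_pip_boxes(frames, L=100, r=0.8, min_area=400):
    """Sliding-window edge statistics: edge pixels present in more than L*r of
    the L frames of a window form the statistical edge map; rectangles found in
    that map are candidate picture-in-picture borders.  L, r and min_area are
    illustrative values, not prescribed by the patent."""
    boxes = []
    for start in range(0, len(frames) - L + 1, L):
        window = frames[start:start + L]
        acc = np.zeros(window[0].shape[:2], dtype=np.float64)
        for f in window:
            gray = cv2.cvtColor(f, cv2.COLOR_BGR2GRAY)
            acc += (cv2.Canny(gray, 100, 200) > 0)        # count edge occurrences
        stable = ((acc >= L * r).astype(np.uint8)) * 255  # statistical edge map
        contours, _ = cv2.findContours(stable, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            x, y, w, h = cv2.boundingRect(c)
            if w * h >= min_area:                         # keep frame-like rectangles
                boxes.append((start, x, y, w, h))
    return boxes
```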
Shot segmentation and clustering: shots are segmented by detecting changes in the similarity between consecutive frames; a representative frame is then computed for each shot, and the similarity between representative frames is used as the similarity between shots for clustering. Shot boundaries are located with the frame similarity measure described above. To prevent accumulated frame differences within long shots from degrading detection, a threshold on the accumulated frame difference should be applied when controlling inter-frame similarity. Note that in real video, gradual transitions are often used between shots, which can produce spurious shots with very few frames in the detection result, so shots with too few frames can be discarded. In addition, to improve efficiency, frame skipping can be used, i.e. similarity is compared only every few frames, although this may slightly reduce the accuracy of the subsequent clustering and search.
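A sketch of the shot segmentation just described, reusing the frame_signature and frame_similarity functions from the earlier sketch; the similarity threshold, minimum shot length and skip step are illustrative values, not prescribed by the patent.

```python
def segment_shots(frames, sim_threshold=0.6, min_len=5, skip=1, n=4):
    """Shot segmentation: a new shot starts when the block-histogram similarity
    between sampled frames drops below sim_threshold (frame skipping via `skip`);
    shots with fewer than min_len frames are merged into the previous shot."""
    frames = list(frames)
    sigs = [frame_signature(f, n) for f in frames[::skip]]
    boundaries = [0]
    for k in range(1, len(sigs)):
        if frame_similarity(sigs[k - 1], sigs[k]) < sim_threshold:
            boundaries.append(k * skip)                  # shot boundary detected
    boundaries.append(len(frames))
    shots = [frames[a:b] for a, b in zip(boundaries, boundaries[1:]) if b > a]
    merged = []
    for shot in shots:
        if merged and len(shot) < min_len:               # spurious short shot
            merged[-1].extend(shot)                      # merge into previous shot
        else:
            merged.append(list(shot))
    return merged
```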
To realize fast shot lookup, the shot clusters of all videos in the database are organized with a K-means tree, forming a shot index. The K-means tree is built by iteratively applying K-means clustering to the shot clusters until the number of members in each subset falls below a certain threshold, or the similarity between the shot clusters inside each subset reaches a certain threshold. The set partition formed by the whole clustering process is represented as a k-ary tree, which is the final K-means tree. In this tree, the closer the "kinship" between two shot clusters, the higher their similarity. With the index tree, the relative positions of two shot clusters in the tree can be compared instead of directly computing their similarity. When copy detection is performed on the shot clusters of a new video, each search of the index tree only needs to compare the new cluster with the K children under the current node, so the search complexity is reduced to Θ(log M), where M is the number of shot clusters in the database.
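A simplified sketch of the K-means tree index, assuming scikit-learn's KMeans and NumPy, with `features` an (M, d) NumPy array. To keep the example self-contained, each shot cluster is summarized by a single feature vector (e.g. the mean of its representative-frame signatures) and cosine similarity stands in for the cluster similarity of formula (9); the stopping rule uses only a leaf-size threshold, whereas the patent also allows an intra-node similarity threshold.

```python
import numpy as np
from sklearn.cluster import KMeans

class KMeansTreeNode:
    def __init__(self, items, center):
        self.items = items        # indices of the shot clusters in this subtree
        self.center = center      # mean feature vector of those shot clusters
        self.children = []

def build_kmeans_tree(features, items=None, k=4, leaf_size=8):
    """Recursively partition the shot-cluster feature vectors with K-means
    until a node holds at most leaf_size items."""
    if items is None:
        items = list(range(len(features)))
    node = KMeansTreeNode(items, features[items].mean(axis=0))
    if len(items) <= max(leaf_size, k):
        return node
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(features[items])
    if len(set(km.labels_)) < 2:          # degenerate split: stop recursing
        return node
    for label in range(k):
        child = [items[i] for i in np.flatnonzero(km.labels_ == label)]
        if child:
            node.children.append(build_kmeans_tree(features, child, k, leaf_size))
    return node

def _cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def search_kmeans_tree(root, query, features, sim_threshold=0.9):
    """Descend the tree by always moving to the child whose center is most
    similar to the query; only the K children of the current node are compared
    at each level, so roughly O(log M) levels are visited for M shot clusters.
    At the leaf, the best-matching shot cluster decides whether the query is a copy."""
    node = root
    while node.children:
        node = max(node.children, key=lambda c: _cos(c.center, query))
    best = max(node.items, key=lambda i: _cos(features[i], query))
    best_sim = _cos(features[best], query)
    return best, best_sim, best_sim >= sim_threshold
```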
Using the K-means tree greatly accelerates video retrieval.
Description of drawings
Fig. 1 is the detection flowchart of the system.
Embodiment
Fig. 1 is the flowchart by which the system performs video copy detection.
The system carries out detection in the following steps:
1. Build the video database. To determine whether a given video contains copied content, a sample video library must first be built; for each target video to be checked, samples satisfying the video copy criterion are searched for in this library, and if such a sample is found, the target video is considered to contain copied video content.
2. Process the database videos. After the sample video library is built, each video it contains is processed as follows:
1) Picture-in-picture detection. Picture-in-picture detection can be completed in a single pass over the video. The concrete steps are:
a) Read the current frame of the video and perform edge detection on it.
b) Preset a window of L frames, where the concrete value of L depends on the video length and implementation conditions. Compute the statistical edge map for this window: keep the edge points whose frequency of occurrence within the L frames of the window exceeds the threshold r (r is determined empirically), and remove all edge points whose frequency is below r. Perform rectangular frame detection on the statistical edge map; the detected rectangular frames are the candidate picture-in-picture edges.
c) Over the period during which a candidate picture-in-picture edge appears in the original video, measure the degree of variation of the sub-image inside the edge; the measurement is essentially the same as the shot segmentation method for video frames described below. If the sub-image inside the edge varies too little, discard the candidate edge.
d) After picture-in-picture detection is finished, the detected picture-in-picture region can be extracted from the original video and processed as a new video.
2) Shot extraction. Shot segmentation can likewise be completed in a single pass over the video. The concrete steps are:
a) Read the current frame of the video, convert it to grayscale and apply histogram equalization, then compute its 4*4 block 16-bin histogram vectors and normalize the result.
b) Compute the inter-frame similarity between the current frame and the previous frame using formulas (1)-(4).
c) If the similarity computed in b) is below a certain threshold, a new shot is considered to begin.
d) If the number of frames in the newly created shot is below a certain threshold, merge it into the previous shot.
e) If shot detection must be sped up because of hardware or time constraints, the inter-frame similarity can be compared only every few frames.
3) Compute the shot representative frames. For each shot extracted from the video, compute one or more representative frames using formulas (5) and (6). If a shot is very long, it can be cut into segments and a representative frame computed for each segment.
4) Shot clustering. To reduce the number of shots to be processed, the shots within the same video are clustered, converting shots into shot clusters. The similarity between representative frames is used as the similarity between shots. An unsupervised clustering algorithm is used.
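The patent does not name the specific unsupervised algorithm; the following sketch shows one possible greedy grouping based on representative-frame similarity, reusing frame_similarity from the earlier sketch, with an illustrative threshold.

```python
def cluster_shots(rep_signatures, sim_threshold=0.8):
    """Greedy within-video shot clustering: assign each shot to the first
    existing cluster containing a shot whose representative-frame similarity
    exceeds sim_threshold, otherwise start a new cluster."""
    clusters = []                            # each cluster: list of shot indices
    for idx, sig in enumerate(rep_signatures):
        for cluster in clusters:
            if any(frame_similarity(sig, rep_signatures[j]) >= sim_threshold
                   for j in cluster):
                cluster.append(idx)
                break
        else:
            clusters.append([idx])
    return clusters
```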
3. Build the database index. After all videos in the database have been processed, the shot clusters obtained from clustering are pooled, the K-means clustering algorithm is applied iteratively, and the clustering result is used to build the K-means index tree.
4. Once the database index has been built, video copy detection can be performed. Each target video to be checked goes through all the processing steps of step 2, yielding the shot clusters it contains. Each shot cluster is then searched for in the K-means tree of the sample video library. The K-means tree is an incomplete K-ary tree. The search starts at the first layer of child nodes below the root: the query shot cluster is compared with each child node, the child node with the highest similarity is selected, and the comparison then moves on to the children of that node. This process is iterated until one of the following conditions is met:
a) A node is found whose shot clusters reach a certain similarity threshold with the query shot cluster; the target video is then considered to contain copied video content.
b) A leaf node of the tree is reached and the similarity between the shot clusters it contains and the query shot cluster still does not reach the predetermined threshold; the video content represented by this shot cluster is then considered to contain no copied content.
If none of the shot clusters of the target video matches copied content, the target video is considered to contain no copied video content.
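Tying the query steps together, a minimal sketch of the final decision, reusing search_kmeans_tree from the index sketch above (the similarity threshold is illustrative): the target video is reported as containing copied content as soon as any of its shot clusters matches the index above the threshold.

```python
def video_contains_copy(index_root, query_cluster_features, db_features,
                        sim_threshold=0.9):
    """Step 4: report the target video as containing copied content as soon as
    any of its shot clusters matches the database index above the threshold;
    otherwise report it as clean."""
    for q in query_cluster_features:
        _, _, is_copy = search_kmeans_tree(index_root, q, db_features, sim_threshold)
        if is_copy:
            return True
    return False
```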

Claims (6)

1. A video copy detection method suitable for IPTV, characterized in that it comprises the following steps:
a) For each video in the database, first perform picture-in-picture detection, then shot segmentation and clustering, and build a K-means tree index from the clustering result;
b) When a new video needs to be checked, first perform picture-in-picture detection on it, then shot segmentation and clustering, and search the database index built from the clustering result in step a);
c) Output the search result.
2. The method of claim 1, characterized in that the picture-in-picture detection obtains the boundary and position of the picture-in-picture region by looking for image edges that appear continuously and stably and exhibit picture-in-picture characteristics, the concrete steps comprising:
a) Perform edge detection on each frame of the video under examination;
b) Apply a sliding window to the video: within a window of L frames, edges whose frequency of occurrence exceeds L*r are marked as candidate picture-in-picture edges, r being a threshold factor;
c) Examine the shape features of each candidate edge and the amplitude of content variation inside it; if the content inside a candidate edge varies too little, the edge can be regarded as an ordinary frame-like element of the original video and excluded as a possible picture-in-picture region.
3. The method of claim 1, characterized in that the shot segmentation and clustering steps are as follows:
a) Compute the similarity between consecutive frames using a specific frame similarity measure;
b) Filter the detected shots, discarding spurious shots that contain too few frames, so as to avoid the influence of transition effects between shots;
c) To improve detection efficiency, frame skipping can be adopted, i.e. the frame similarity is evaluated only every few frames;
d) For each extracted shot, compute one or more representative frames, and use the similarity between representative frames to compute the similarity between shots.
4. The method of claim 3, characterized in that the specific frame similarity is computed as follows:
a) After converting a video frame to grayscale and applying histogram equalization, divide it into N*N equally sized sub-images, compute a 16-bin gray-level histogram for each sub-image, and normalize the result; each video frame can then be represented by N*N normalized 16-dimensional vectors;
Each frame is expressed as:
I_i = \{ v_i^{(1)}, v_i^{(2)}, v_i^{(3)}, \ldots, v_i^{(N\times N)} \}    (1)
Each sub-image is expressed as:
v^{(k)} = \langle v_1^{(k)}, v_2^{(k)}, v_3^{(k)}, \ldots, v_{16}^{(k)} \rangle    (2)
The similarity between two frames is computed as:
s_{i,j}^{k} = \frac{v_i^{(k)} \cdot v_j^{(k)}}{\| v_i^{(k)} \| \, \| v_j^{(k)} \|}    (3)
s_{i,j} = \sum_{k=1}^{N\times N} \left( s_{i,j}^{k} \right)^2 / N^2    (4).
5. The method of claim 3, characterized in that the representative frame of a shot is extracted as follows:
The representative frame is obtained as a weighted mean of all frames in the shot:
a) Let each shot be expressed as:
S_i = \{ I_1, I_2, I_3, \ldots, I_K \}    (5)
The representative frame is computed as:
I_i^{o} = \sum_{n=1}^{K} \omega_n I_n \Big/ \sum_{n=1}^{K} \omega_n    (6)
b) The weights ω_n are obtained from a normal distribution.
6. The method of claim 1, characterized in that the shot index is constructed as follows:
a) Use the similarity between shot representative frames as the similarity between shots, and cluster the shots by this similarity, yielding a number of shot clusters;
b) Organize all shot clusters with a K-means tree; when building the K-means index tree, compute the similarity between shot clusters as follows:
Each shot cluster is expressed as:
SC_i = \{ S_i^{(1)}, S_i^{(2)}, S_i^{(3)}, \ldots, S_i^{(M)} \}    (8)
The similarity between shot clusters is:
SCS_{i,j} = \max_{0 < m < N,\ 0 < n < N} \{ SS_{i,j}(m, n) \}    (9).
CN 201010166974 2010-05-06 2010-05-06 Video copying detection method being suitable for IPTV Pending CN101872415A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201010166974 CN101872415A (en) 2010-05-06 2010-05-06 Video copying detection method being suitable for IPTV

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 201010166974 CN101872415A (en) 2010-05-06 2010-05-06 Video copying detection method being suitable for IPTV

Publications (1)

Publication Number Publication Date
CN101872415A true CN101872415A (en) 2010-10-27

Family

ID=42997270

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201010166974 Pending CN101872415A (en) 2010-05-06 2010-05-06 Video copying detection method being suitable for IPTV

Country Status (1)

Country Link
CN (1) CN101872415A (en)



Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7486827B2 (en) * 2005-01-21 2009-02-03 Seiko Epson Corporation Efficient and robust algorithm for video sequence matching
CN101394522A (en) * 2007-09-19 2009-03-25 中国科学院计算技术研究所 Detection method and system for video copy

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Pan Yan et al., "A video copy detection system suitable for IPTV" (适用于IPTV的视频拷贝检测系统), Journal of Image and Graphics (中国图象图形学报), Vol. 14, No. 11, 30 November 2009, pp. 2421-2424; relevant to claims 1-6 *

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102542576A (en) * 2010-12-15 2012-07-04 索尼公司 Image processing device, image processing method and program
CN103503469A (en) * 2011-05-06 2014-01-08 微软公司 Staged element classification
CN103503469B (en) * 2011-05-06 2017-07-28 微软技术许可有限责任公司 The categorizing system of element stage by stage
CN105612535A (en) * 2013-08-29 2016-05-25 匹斯奥特(以色列)有限公司 Efficient content-based video retrieval
CN105612535B (en) * 2013-08-29 2019-11-12 匹斯奥特(以色列)有限公司 Efficient content based video retrieval system
CN104539942A (en) * 2014-12-26 2015-04-22 赞奇科技发展有限公司 Video shot switching detection method and device based on frame difference cluster
CN105095858A (en) * 2015-06-26 2015-11-25 华为技术有限公司 Method and device for detecting embedded video
CN105072455A (en) * 2015-08-11 2015-11-18 精硕世纪科技(北京)有限公司 Video matching method and device
CN105893463B (en) * 2016-03-23 2019-11-05 广州酷狗计算机科技有限公司 Album input method and device
CN105893463A (en) * 2016-03-23 2016-08-24 广州酷狗计算机科技有限公司 Album recording method and device
CN107480163A (en) * 2017-06-19 2017-12-15 西安电子科技大学 The efficient ciphertext image search method of secret protection is supported under a kind of cloud environment
CN107480163B (en) * 2017-06-19 2020-03-24 西安电子科技大学 Efficient ciphertext image retrieval method supporting privacy protection in cloud environment
CN108682017A (en) * 2018-04-11 2018-10-19 浙江工业大学 Super-pixel method for detecting image edge based on Node2Vec algorithms
CN108682017B (en) * 2018-04-11 2021-06-18 浙江工业大学 Node2Vec algorithm-based super-pixel image edge detection method
CN110798736A (en) * 2019-11-28 2020-02-14 百度在线网络技术(北京)有限公司 Video playing method, device, equipment and medium
CN113014957A (en) * 2021-02-25 2021-06-22 北京市商汤科技开发有限公司 Video shot segmentation method and device, medium and computer equipment

Similar Documents

Publication Publication Date Title
CN101872415A (en) Video copying detection method being suitable for IPTV
Chaudhuri et al. Multilabel remote sensing image retrieval using a semisupervised graph-theoretic method
CN107665498B (en) Full convolution network aircraft detection method based on typical example mining
CN107169106B (en) Video retrieval method, device, storage medium and processor
CA2753978C (en) Clustering videos by location
US9626585B2 (en) Composition modeling for photo retrieval through geometric image segmentation
CN103530638B (en) Method for pedestrian matching under multi-cam
CN111242221B (en) Image matching method, system and storage medium based on image matching
CN102254006A (en) Method for retrieving Internet video based on contents
CN103297851A (en) Method and device for quickly counting and automatically examining and verifying target contents in long video
CN113112519B (en) Key frame screening method based on interested target distribution
CN102156686B (en) Method for detecting specific contained semantics of video based on grouped multi-instance learning model
CN105608233A (en) Video copy detection method based on improved OM features
CN104036280A (en) Video fingerprinting method based on region of interest and cluster combination
JP2013016168A (en) Method and device for positioning text area in image
CN105139031A (en) Data processing method based on subspace clustering
CN102495887B (en) Video lens partitioning method based on color matrixes of key regions and application thereof
CN110267101A (en) A kind of unmanned plane video based on quick three-dimensional picture mosaic takes out frame method automatically
CN110751027A (en) Pedestrian re-identification method based on deep multi-instance learning
CN111222546B (en) Multi-scale fusion food image classification model training and image classification method
CN113920148B (en) Building boundary extraction method and equipment based on polygon and storage medium
CN105825201A (en) Moving object tracking method in video monitoring
CN108966042B (en) Video abstract generation method and device based on shortest path
CN106611030A (en) Object similarity comparison method and object search method based on video, and object similarity comparison system and object search system based on video
CN109359530B (en) Intelligent video monitoring method and device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20101027