CN103400155A - Pornographic video detection method based on semi-supervised learning of images - Google Patents
Pornographic video detection method based on semi-supervised learning of images

- Publication number: CN103400155A
- Authority: CN (China)
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Abstract
The invention relates to a pornographic video detection method based on graph semi-supervised learning. The method first segments the video into shots and obtains the key frame of each shot, the key frame being the middle frame of the shot. Inter-frame differencing between the key frame and several adjacent frames extracts a partial motion foreground region. This region then serves as prior information for obtaining the true foreground region: a graph-based semi-supervised learning method extracts the complete foreground region, the skin-color region within it is segmented, harmful key frames are identified from features of the skin-color region, and the video content is judged for harmfulness. In short, the invention obtains a partial motion foreground region by the inter-frame difference method, uses that region as prior information, extracts the complete foreground region by graph-based semi-supervised learning, and then performs pornographic-content detection on the extracted foreground region.
Description
Technical field
The invention belongs to the field of computer applications and relates to pornographic video detection, in particular to a pornographic video detection method based on graph semi-supervised learning.
Background art
With the rapid development of the Internet and multimedia technology, video media has become ubiquitous on the network and an important part of daily life and entertainment. However, the network is also flooded with pornographic videos, which have a seriously negative effect on social culture and daily life. Effectively blocking the spread of pornographic videos on the network is therefore of great significance for safeguarding the physical and mental health of teenagers and for keeping online video resources clean.
In recent years, pornographic video detection methods have emerged continuously. They fall into three classes: the first extracts key frames from the video and detects pornographic content in them; the second extracts the audio track of the video and identifies its content; the third obtains the video's motion vectors and analyzes motion features. The biggest difference between pornographic and normal videos is that pornographic video frames contain large amounts of exposed skin, so the first class of methods achieves higher accuracy than the other two. Patent 201010568307, for example, extracts multiple key frames from a video, detects pornographic content in each, and fuses the per-frame detection results into a final decision. However, because images may contain background regions whose color is similar to skin, the per-key-frame detection results are not accurate enough. Building on the temporal continuity of video, Lv Li et al. proposed first extracting the key frames, then using the frames immediately following each key frame to extract a foreground region, and detecting pornographic content within that region. But when the background is complex and the target moves little within a short time, the extracted foreground region is incomplete, which again makes the detection results inaccurate.
Summary of the invention
To overcome the shortcomings of the prior art described above, the object of the present invention is to provide a pornographic video detection method based on graph semi-supervised learning. Because the foreground regions of a pornographic video have strongly similar colors, a complete foreground region can be recovered from partially extracted foreground information. The present invention obtains a partial motion foreground region by the inter-frame difference method, uses this region as prior information, extracts the complete foreground region by a graph-based semi-supervised learning method, and then performs pornographic-content detection on the extracted foreground region.
To achieve these goals, the present invention adopts the following technical solution:
A pornographic video detection method based on graph semi-supervised learning: first segment the video into shots and obtain the key frame of each shot, the key frame being the middle frame of the shot; perform inter-frame differencing between the key frame and adjacent frames to extract a partial motion foreground region; use this extracted region as prior information for obtaining the true foreground region; extract the complete foreground region by a graph-based semi-supervised learning method and segment the skin-color region within it; finally, identify harmful key frames from features of the skin-color region and judge the video content for harmfulness.
The partial motion foreground region is extracted as follows:
A) Shot segmentation
Compute the color-histogram difference of adjacent video frames in the RGB color space and detect shot boundaries with a dual-threshold algorithm;
B) Inter-frame differencing
Difference the key frame with the frames l positions before and after it, binarize each frame-difference image with a threshold T, and obtain binary images D_a and D_b; the frame-difference method then yields the key frame's foreground/background binary image D_k = D_a ∩ D_b;
C) Morphological filtering
Eliminate discrete noise points with morphological dilation and erosion, fill the holes in connected regions, and denote the resulting foreground region β.
The color-histogram difference of adjacent video frames is computed as
Z = (1/M) · Σ_{i ∈ {R,G,B}} Σ_{j=1}^{n} | H_k^i(j) − H_{k+1}^i(j) |
where Z is the inter-frame color-histogram difference, M is the pixel count of the frame, n is the number of color bins, n = 32, and H_k^i(j) is the number of pixels of frame k whose color component i falls in bin j, i ∈ {R, G, B};
When Z ≥ T_l, a cut is declared and the shot boundary is segmented. When T_h < Z < T_l, a gradual transition is assumed and the subsequent inter-frame histogram differences are accumulated; when the accumulated value reaches T_l, a cut is declared and the shot boundary is segmented. Here T_l and T_h are preset thresholds with T_l > T_h.
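The dual-threshold segmentation just described can be sketched in Python. This is a minimal illustration rather than the patented implementation: the exact histogram-difference formula is not reproduced in the source, so a per-pixel-normalized sum of absolute bin differences is assumed, and t_low, t_high are placeholder thresholds (t_low corresponds to T_l, t_high to T_h):

```python
import numpy as np

def color_histogram(frame, n_bins=32):
    """Per-channel (R, G, B) histograms of pixel counts, n_bins bins each."""
    return np.stack([np.histogram(frame[..., c], bins=n_bins, range=(0, 256))[0]
                     for c in range(3)])

def histogram_difference(frame_a, frame_b, n_bins=32):
    """Inter-frame color-histogram difference Z, normalized by the pixel count M."""
    m = frame_a.shape[0] * frame_a.shape[1]
    h_a = color_histogram(frame_a, n_bins)
    h_b = color_histogram(frame_b, n_bins)
    return np.abs(h_a - h_b).sum() / m

def detect_boundaries(frames, t_low, t_high):
    """Dual-threshold shot-boundary detection (t_low > t_high):
    Z >= t_low declares a hard cut; t_high < Z < t_low accumulates as a
    possible gradual transition, declaring a boundary once the sum reaches
    t_low; smaller differences reset the accumulator."""
    boundaries, acc = [], 0.0
    for k in range(len(frames) - 1):
        z = histogram_difference(frames[k], frames[k + 1])
        if z >= t_low:
            boundaries.append(k + 1)
            acc = 0.0
        elif z > t_high:
            acc += z
            if acc >= t_low:
                boundaries.append(k + 1)
                acc = 0.0
        else:
            acc = 0.0
    return boundaries
```

For two identical frames Z = 0, while a full black-to-white cut gives Z = 6, since every pixel changes bin in all three channels.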
The key frame is differenced with the frame l positions before it to form the binary image D_b: the value at a pixel is 1 where the absolute difference exceeds the threshold T, and 0 otherwise.
The key frame is differenced with the frame l positions after it to form the binary image D_a in the same way.
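The differencing and intersection steps can be sketched as follows, assuming grayscale frames and an unspecified threshold T (the source gives no value):

```python
import numpy as np

def binary_difference(frame_a, frame_b, t):
    """Absolute difference of two grayscale frames, binarized at threshold t."""
    diff = np.abs(frame_a.astype(np.int16) - frame_b.astype(np.int16))
    return (diff > t).astype(np.uint8)

def keyframe_foreground_mask(prev_frame, key_frame, next_frame, t):
    """D_b from (key, previous) and D_a from (key, next); D_k = D_a AND D_b
    keeps only pixels that differ from both neighbors, suppressing the ghost
    regions a single difference image would contain."""
    d_b = binary_difference(key_frame, prev_frame, t)
    d_a = binary_difference(key_frame, next_frame, t)
    return d_a & d_b
```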
The complete foreground region is extracted by the graph-based semi-supervised learning method as follows:
A) Divide the key frame of the shot into M × N blocks by a grid, extract the color feature of each block, and construct a graph Φ with the blocks as nodes;
Each of the M × N blocks contains 30 × 30 pixels, with M = h/30 and N = w/30, where h is the height of the key frame and w its width.
B) Compute the weight matrix W of the edges of graph Φ and normalize W to obtain the matrix S;
C) Mark the initial label values of the blocks according to the obtained motion foreground region β: foreground blocks receive label value 1, and all other blocks receive label value 0, indicating that their class is undetermined; this forms the initial label vector f_0;
D) Iteratively propagate block label values to neighboring blocks with f_{m+1} = a·S·f_m + (1 − a)·f_0, where f_m is the block label vector after m propagation steps and a is a constant with 0 < a < 1;
E) Compute the classification threshold, classify the undetermined blocks, and obtain the complete foreground region ξ.
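Steps B) through D) can be illustrated on a toy graph. The 3-node chain, its 0/1 affinities, and a = 0.5 are invented for the example; the symmetric normalization S = D^(-1/2) W D^(-1/2) is assumed, since it is the normalization under which the iteration converges to the closed form (1 − a)(I − aS)^(-1) f_0 given in the embodiment:

```python
import numpy as np

def propagate_labels(S, f0, a=0.5, iters=200):
    """Iterate f_{m+1} = a * S @ f_m + (1 - a) * f0 (step D) until it settles."""
    f = f0.astype(float).copy()
    for _ in range(iters):
        f = a * (S @ f) + (1 - a) * f0
    return f

# Toy graph: a 3-block chain 0 - 1 - 2, block 0 labeled foreground by the prior.
W = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
d = W.sum(axis=1)                  # node degrees
S = W / np.sqrt(np.outer(d, d))    # symmetric normalization D^{-1/2} W D^{-1/2}
f0 = np.array([1., 0., 0.])        # initial label vector (step C)
f_star = propagate_labels(S, f0, a=0.5)
# Label mass decays with distance from the labeled block:
# f_star[0] > f_star[1] > f_star[2]
```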
The weight matrix W is computed as follows:
If the blocks represented by vectors x_i and x_j are adjacent, w_ij is given by a Gaussian kernel of the cosine distance d(i, j) with bandwidth σ (i ≠ j, σ a constant); otherwise w_ij = 0. To prevent self-similarity, w_ii = 0. W is normalized as S = D^{−1/2} W D^{−1/2}, where D is the diagonal degree matrix of W. Here x_i is the color feature vector of block i, x_j is the color feature vector of block j, and d(i, j) is the cosine distance between x_i and x_j.
The classification threshold is computed as follows:
Let f* = (f_1, f_2, …, f_{M×N}), where f_i is the label value of block i.
The entries of f* are sorted into an ordered array Γ; adjacent entries of Γ are differenced; the two adjacent label values with the largest difference are found; and their mean δ is the classification threshold.
The undetermined blocks are classified as follows: when f_i ≥ δ, the block corresponding to x_i is judged a foreground block; when f_i < δ, the corresponding block is judged a background block. The complete foreground region ξ is thus obtained.
Harmful key frames are identified from features of the skin-color region as follows:
A Gaussian skin-color model identifies the skin-color region within the complete foreground region; the ratio of the skin-color area to the picture is computed; the key frame is judged by this ratio and pornographic shots are identified. When P consecutive shots are detected as pornographic, the video is judged to be a pornographic video; for example, P = 5.
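The video-level decision reduces to a run-length check over per-shot flags; is_porn_video is an illustrative helper, with P = 5 as in the example:

```python
def is_porn_video(shot_flags, p=5):
    """True if at least p consecutive shots are flagged pornographic."""
    run = 0
    for flag in shot_flags:
        run = run + 1 if flag else 0   # extend or reset the current run
        if run >= p:
            return True
    return False
```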
Compared with the prior art, the graph-based semi-supervised pornographic video recognition algorithm presented here extracts complete foreground regions from video shots. Performing skin-color detection on the foreground region effectively avoids interference from backgrounds whose color is close to skin and yields accurate detection results.
Embodiment
Embodiments of the present invention are described in detail below with reference to a concrete example.
The pornographic video detection method based on graph semi-supervised learning proceeds as follows:
Step 1: extract the foreground region based on motion information, as follows:
A) Detect shot boundaries in the video with the dual-threshold algorithm.
Extract the histograms of the R, G and B components of each video frame and compute the histogram difference of consecutive frames:
Z = (1/M) · Σ_{i ∈ {R,G,B}} Σ_{j=1}^{n} | H_k^i(j) − H_{k+1}^i(j) |
where Z is the inter-frame color-histogram difference, M is the pixel count of the frame, n is the number of color bins (n = 32), and H_k^i(j) is the number of pixels of frame k whose color component i falls in bin j, i ∈ {R, G, B}.
When Z ≥ T_l, a cut is declared and the shot boundary is segmented. When T_h < Z < T_l, a gradual transition is assumed and the subsequent inter-frame histogram differences are accumulated; when the accumulated value reaches T_l, a cut is declared and the shot boundary is segmented. Here T_l and T_h are preset thresholds with T_l > T_h.
B) Extract the motion foreground region within the shot by frame differencing.
The middle frame of the shot is taken as the key frame. The key frame is differenced with the frames l positions before and after it, and each frame-difference image is binarized with the threshold T:
(1) The key frame is differenced with the frame l positions before it to form the binary image D_b: the value at a pixel is 1 where the absolute difference exceeds the threshold T, and 0 otherwise.
(2) The key frame is differenced with the frame l positions after it to form the binary image D_a in the same way.
The key-frame foreground/background binary image extracted by frame differencing is D_k = D_a ∩ D_b; the AND operation removes the spurious regions that object motion produces in each single difference image and improves the robustness of the system.
C) Morphological filtering
The binary image D_k obtained by frame differencing contains connected regions with internal holes as well as isolated, discrete noise points. D_k is first opened to remove the noise points and then closed to fill the holes in the connected regions, yielding the motion foreground region β.
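The opening-then-closing filter can be sketched in plain NumPy. A 3 × 3 structuring element is assumed, since the source does not specify one:

```python
import numpy as np

def dilate(img):
    """Binary dilation of a 0/1 uint8 image with a 3x3 structuring element."""
    p = np.pad(img, 1)
    out = np.zeros_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= p[1 + dy:1 + dy + img.shape[0], 1 + dx:1 + dx + img.shape[1]]
    return out

def erode(img):
    """Binary erosion of a 0/1 uint8 image with a 3x3 structuring element."""
    p = np.pad(img, 1)
    out = np.ones_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= p[1 + dy:1 + dy + img.shape[0], 1 + dx:1 + dx + img.shape[1]]
    return out

def clean_mask(d_k):
    """Opening (erode then dilate) removes isolated noise points;
    closing (dilate then erode) fills small holes in connected regions."""
    opened = dilate(erode(d_k))
    return erode(dilate(opened))
```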
Step 2: obtain the complete foreground region with the graph semi-supervised learning method.
The foreground region obtained by inter-frame differencing is incomplete, usually only a part of the true foreground region. Using region β as prior information, the complete foreground region ξ is obtained with the graph-based semi-supervised learning method, as follows:
A) Divide the key frame of the shot into M × N blocks by a grid and extract the color feature of each block. The block features form the matrix X = {x_1, x_2, …, x_n}, where x_i is the color feature vector of block i. Construct the graph Φ with these vectors as nodes.
B) Compute the weight matrix W of the edges in graph Φ. If the blocks represented by x_i and x_j are adjacent, w_ij is given by a Gaussian kernel of the cosine distance d(i, j) with bandwidth σ (i ≠ j, σ a constant); otherwise w_ij = 0. To prevent self-similarity, w_ii = 0. W is normalized as S = D^{−1/2} W D^{−1/2}, where D is the diagonal degree matrix of W and d(i, j) is the cosine distance between x_i and x_j.
C) Mark the initial class labels of the blocks. The foreground pixels of β are counted per block: if a block contains more moving pixels than static pixels, it is a foreground block and its class label value is 1; every other block gets class label value 0, indicating that its class is undetermined. This forms the initial class label vector f_0.
D) Propagate the label values. The block label values are passed to neighboring blocks by iterating:
f_{m+1} = a·S·f_m + (1 − a)·f_0    (5)
Formula (5) finally converges to:
f* = (1 − a)(I − aS)^{−1} f_0    (6)
where f* = (f_1, f_2, …, f_{M×N}) and f_i is the label value of block i.
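Formula (6) can be evaluated directly instead of running iteration (5) to convergence; a sketch that solves the linear system rather than forming the explicit inverse:

```python
import numpy as np

def propagate_closed_form(S, f0, a=0.5):
    """f* = (1 - a) (I - a S)^{-1} f0, the limit of iteration (5).
    np.linalg.solve avoids computing the matrix inverse explicitly."""
    n = len(f0)
    return (1 - a) * np.linalg.solve(np.eye(n) - a * S, f0)
```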
E) Determine the class labels of the unmarked blocks.
The entries of f* are sorted into an ordered array Γ; adjacent entries of Γ are differenced, and the two adjacent label values with the largest difference are found; their mean δ is computed. The class of an unmarked block is then determined as follows: when f_i ≥ δ, the block corresponding to x_i is judged a foreground block; when f_i < δ, the corresponding block is judged a background block. Foreground and background usually differ markedly in a video frame, and the threshold δ chosen this way splits the blocks of the frame into the two most separated groups. These steps yield the complete foreground region ξ.
Step 3: segment and identify the skin-color region in the complete foreground region.
A) Build the Gaussian skin-color model.
I. A set of skin images is given as the training set, with skin pixels Y = {y_1, y_2, …, y_n}, where y_i = {R_i, G_i, B_i} is the RGB value of skin pixel i.
II. Build the Gaussian skin-color model in the RGB color space from these training pixels.
B) Use the Gaussian skin-color model to segment the skin-color region within the foreground region ξ and compute the ratio of the skin-color area to the image area. When the ratio exceeds a preset threshold, the key frame is judged a pornographic image and its shot a pornographic shot.
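A sketch of a Gaussian skin-color model in RGB. The patent's model formula is not reproduced, so a single multivariate Gaussian fitted by sample mean and covariance is assumed, with an illustrative likelihood threshold standing in for the unspecified decision rule:

```python
import numpy as np

def fit_skin_model(skin_pixels):
    """Mean vector and covariance matrix of the training skin pixels (RGB)."""
    mu = skin_pixels.mean(axis=0)
    cov = np.cov(skin_pixels, rowvar=False)
    return mu, cov

def skin_likelihood(pixels, mu, cov):
    """Unnormalized Gaussian likelihood exp(-0.5 * (x-mu)^T cov^{-1} (x-mu))."""
    diff = pixels - mu
    inv = np.linalg.inv(cov)
    m = np.einsum('ij,jk,ik->i', diff, inv, diff)  # squared Mahalanobis distance
    return np.exp(-0.5 * m)

def skin_ratio(region_pixels, mu, cov, like_thresh=0.5):
    """Fraction of the foreground region's pixels classified as skin
    (like_thresh is an illustrative cutoff, not a value from the patent)."""
    return float((skin_likelihood(region_pixels, mu, cov) >= like_thresh).mean())
```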
C) In a pornographic video most shots are pornographic, so several consecutive pornographic shots will inevitably occur. Accordingly, when P consecutive shots are detected as pornographic, the video is judged to be a pornographic video.
Claims (10)
1. A pornographic video detection method based on graph semi-supervised learning, characterized in that: the video is first segmented into shots and the key frame of each shot is obtained, the key frame being the middle frame of the shot; inter-frame differencing is performed between the key frame and adjacent frames to extract a partial motion foreground region; the extracted region is used as prior information for obtaining the true foreground region; the complete foreground region is extracted by a graph-based semi-supervised learning method and the skin-color region within it is segmented; finally, harmful key frames are identified from features of the skin-color region and the video content is judged for harmfulness.
2. The pornographic video detection method based on graph semi-supervised learning according to claim 1, characterized in that the partial motion foreground region is extracted as follows:
A) Shot segmentation: compute the color-histogram difference of adjacent video frames in the RGB color space and detect shot boundaries with a dual-threshold algorithm;
B) Inter-frame differencing: difference the key frame with the frames l positions before and after it, binarize each frame-difference image with a threshold T to obtain binary images D_a and D_b, and take the key frame's foreground/background binary image as D_k = D_a ∩ D_b;
C) Morphological filtering: eliminate discrete noise points with morphological dilation and erosion, fill the holes in connected regions, and denote the resulting foreground region β.
3. The pornographic video detection method based on graph semi-supervised learning according to claim 2, characterized in that the color-histogram difference of adjacent video frames is computed as
Z = (1/M) · Σ_{i ∈ {R,G,B}} Σ_{j=1}^{n} | H_k^i(j) − H_{k+1}^i(j) |
where Z is the inter-frame color-histogram difference, M is the pixel count of the frame, n is the number of color bins, n = 32, and H_k^i(j) is the number of pixels of frame k whose color component i falls in bin j, i ∈ {R, G, B};
when Z ≥ T_l, a cut is declared and the shot boundary is segmented; when T_h < Z < T_l, a gradual transition is assumed and the subsequent inter-frame histogram differences are accumulated until the accumulated value reaches T_l, at which point a cut is declared and the shot boundary is segmented; T_l and T_h are preset thresholds with T_l > T_h.
4. The pornographic video detection method based on graph semi-supervised learning according to claim 2, characterized in that the key frame is differenced with the frame l positions before it to form the binary image D_b, whose value at a pixel is 1 where the absolute difference exceeds the threshold T and 0 otherwise; and the key frame is differenced with the frame l positions after it to form the binary image D_a in the same way.
5. The pornographic video detection method based on graph semi-supervised learning according to claim 2, characterized in that the complete foreground region is extracted by the graph-based semi-supervised learning method as follows:
A) divide the key frame of the shot into M × N blocks by a grid, extract the color feature of each block, and construct a graph Φ with the blocks as nodes;
B) compute the weight matrix W of the edges of graph Φ and normalize W to obtain the matrix S;
C) mark the initial label values of the blocks according to the obtained motion foreground region β: foreground blocks receive label value 1, and all other blocks receive label value 0, indicating that their class is undetermined, forming the initial label vector f_0;
D) iteratively propagate block label values to neighboring blocks with f_{m+1} = a·S·f_m + (1 − a)·f_0, where f_m is the block label vector after m propagation steps and a is a constant with 0 < a < 1;
E) compute the classification threshold, classify the undetermined blocks, and obtain the complete foreground region ξ.
6. The pornographic video detection method based on graph semi-supervised learning according to claim 5, characterized in that each of the M × N blocks contains 30 × 30 pixels, with M = h/30 and N = w/30, where h is the height of the key frame and w its width.
7. The pornographic video detection method based on graph semi-supervised learning according to claim 5, characterized in that the weight matrix W is computed as follows: if the blocks represented by vectors x_i and x_j are adjacent, w_ij is given by a Gaussian kernel of the cosine distance d(i, j) with bandwidth σ (i ≠ j, σ a constant); otherwise w_ij = 0; to prevent self-similarity, w_ii = 0; W is normalized as S = D^{−1/2} W D^{−1/2}, where D is the diagonal degree matrix of W, x_i is the color feature vector of block i, x_j is the color feature vector of block j, and d(i, j) is the cosine distance between x_i and x_j.
8. The pornographic video detection method based on graph semi-supervised learning according to claim 5, characterized in that the classification threshold is computed as follows: with f* = (f_1, f_2, …, f_{M×N}), where f_i is the label value of block i, the entries of f* are sorted into an ordered array Γ; adjacent entries of Γ are differenced; the two adjacent label values with the largest difference are found; and their mean δ is the classification threshold.
9. The pornographic video detection method based on graph semi-supervised learning according to claim 8, characterized in that the undetermined blocks are classified as follows: when f_i ≥ δ, the block corresponding to x_i is judged a foreground block; when f_i < δ, the corresponding block is judged a background block; the complete foreground region ξ is thus obtained.
10. The pornographic video detection method based on graph semi-supervised learning according to claim 2, characterized in that harmful key frames are identified from features of the skin-color region as follows: a Gaussian skin-color model identifies the skin-color region within the complete foreground region; the ratio of the skin-color area to the picture is computed; the key frame is judged by this ratio and pornographic shots are identified; when P consecutive shots are detected as pornographic, the video is judged to be a pornographic video.
Priority Applications (1)

Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2013102700189A | 2013-06-28 | 2013-06-28 | Pornographic video detection method based on semi-supervised learning of images |
Publications (1)

Publication Number | Publication Date |
---|---|
CN103400155A | 2013-11-20 |

Family ID: 49563773
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104951742A (en) * | 2015-03-02 | 2015-09-30 | 北京奇艺世纪科技有限公司 | Detection method and system for sensitive video |
CN105389558A (en) * | 2015-11-10 | 2016-03-09 | 中国人民解放军信息工程大学 | Method and apparatus for detecting video |
CN106101740A (en) * | 2016-07-13 | 2016-11-09 | 百度在线网络技术(北京)有限公司 | A kind of video content recognition method and apparatus |
CN107358141A (en) * | 2016-05-10 | 2017-11-17 | 阿里巴巴集团控股有限公司 | The method and device of data identification |
CN108063979A (en) * | 2017-12-26 | 2018-05-22 | 深圳Tcl新技术有限公司 | Video playing control method, device and computer readable storage medium |
WO2020052270A1 (en) * | 2018-09-14 | 2020-03-19 | 华为技术有限公司 | Video review method and apparatus, and device |
CN111008978A (en) * | 2019-12-06 | 2020-04-14 | 电子科技大学 | Video scene segmentation method based on deep learning |
CN115988229A (en) * | 2022-11-16 | 2023-04-18 | 阿里云计算有限公司 | Image identification method and device |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101493887A (en) * | 2009-03-06 | 2009-07-29 | 北京工业大学 | Eyebrow image segmentation method based on semi-supervision learning and Hash index |
US8358837B2 (en) * | 2008-05-01 | 2013-01-22 | Yahoo! Inc. | Apparatus and methods for detecting adult videos |
CN102930553A (en) * | 2011-08-10 | 2013-02-13 | 中国移动通信集团上海有限公司 | Method and device for identifying objectionable video content |
CN103034851A (en) * | 2012-12-24 | 2013-04-10 | 清华大学深圳研究生院 | Device and method of self-learning skin-color model based hand portion tracking |
Non-Patent Citations (1)

Title |
---|
郭阿弟 et al.: "不良视频检测系统的设计与实现" [Design and Implementation of an Objectionable Video Detection System], 《中国科技论文在线》 (Sciencepaper Online) |
Legal Events

Code | Title |
---|---|
C06 / PB01 | Publication |
C10 / SE01 | Entry into force of request for substantive examination |
C02 / WD01 | Invention patent application deemed withdrawn after publication (patent law 2001) |

Application publication date: 2013-11-20